2026-02-28 00:00:09.816212 | Job console starting
2026-02-28 00:00:09.843878 | Updating git repos
2026-02-28 00:00:09.985249 | Cloning repos into workspace
2026-02-28 00:00:10.528303 | Restoring repo states
2026-02-28 00:00:10.563121 | Merging changes
2026-02-28 00:00:10.563145 | Checking out repos
2026-02-28 00:00:11.102761 | Preparing playbooks
2026-02-28 00:00:12.320985 | Running Ansible setup
2026-02-28 00:00:21.513606 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-28 00:00:23.743108 |
2026-02-28 00:00:23.743418 | PLAY [Base pre]
2026-02-28 00:00:23.764862 |
2026-02-28 00:00:23.764964 | TASK [Setup log path fact]
2026-02-28 00:00:23.817836 | orchestrator | ok
2026-02-28 00:00:23.867640 |
2026-02-28 00:00:23.867785 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-28 00:00:23.939913 | orchestrator | ok
2026-02-28 00:00:23.972860 |
2026-02-28 00:00:23.972974 | TASK [emit-job-header : Print job information]
2026-02-28 00:00:24.069088 | # Job Information
2026-02-28 00:00:24.069260 | Ansible Version: 2.16.14
2026-02-28 00:00:24.069296 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-02-28 00:00:24.069330 | Pipeline: periodic-midnight
2026-02-28 00:00:24.069353 | Executor: 521e9411259a
2026-02-28 00:00:24.069374 | Triggered by: https://github.com/osism/testbed
2026-02-28 00:00:24.069396 | Event ID: da5c57b108b34da5b60920ea2a4bd68a
2026-02-28 00:00:24.077107 |
2026-02-28 00:00:24.077226 | LOOP [emit-job-header : Print node information]
2026-02-28 00:00:24.341867 | orchestrator | ok:
2026-02-28 00:00:24.342676 | orchestrator | # Node Information
2026-02-28 00:00:24.342740 | orchestrator | Inventory Hostname: orchestrator
2026-02-28 00:00:24.342768 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-28 00:00:24.342792 | orchestrator | Username: zuul-testbed06
2026-02-28 00:00:24.342814 | orchestrator | Distro: Debian 12.13
2026-02-28 00:00:24.342886 | orchestrator | Provider: static-testbed
2026-02-28 00:00:24.342913 | orchestrator | Region:
2026-02-28 00:00:24.342935 | orchestrator | Label: testbed-orchestrator
2026-02-28 00:00:24.342956 | orchestrator | Product Name: OpenStack Nova
2026-02-28 00:00:24.342977 | orchestrator | Interface IP: 81.163.193.140
2026-02-28 00:00:24.367845 |
2026-02-28 00:00:24.367957 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-28 00:00:25.464136 | orchestrator -> localhost | changed
2026-02-28 00:00:25.471776 |
2026-02-28 00:00:25.471879 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-28 00:00:27.798997 | orchestrator -> localhost | changed
2026-02-28 00:00:27.819129 |
2026-02-28 00:00:27.819234 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-28 00:00:28.319135 | orchestrator -> localhost | ok
2026-02-28 00:00:28.324751 |
2026-02-28 00:00:28.324841 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-28 00:00:28.372414 | orchestrator | ok
2026-02-28 00:00:28.419547 | orchestrator | included: /var/lib/zuul/builds/e667d719f59143fa93177324baaeaa58/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-28 00:00:28.453738 |
2026-02-28 00:00:28.453837 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-28 00:00:33.550196 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-28 00:00:33.550384 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/e667d719f59143fa93177324baaeaa58/work/e667d719f59143fa93177324baaeaa58_id_rsa
2026-02-28 00:00:33.550415 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/e667d719f59143fa93177324baaeaa58/work/e667d719f59143fa93177324baaeaa58_id_rsa.pub
2026-02-28 00:00:33.550438 | orchestrator -> localhost | The key fingerprint is:
2026-02-28 00:00:33.550458 | orchestrator -> localhost | SHA256:EDkGzJeu1AV8CUjtTDiKpYGdqR4sSmdA4EZI3CNXqZk zuul-build-sshkey
2026-02-28 00:00:33.550478 | orchestrator -> localhost | The key's randomart image is:
2026-02-28 00:00:33.550506 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-28 00:00:33.550525 | orchestrator -> localhost | |O=.*+B== .       |
2026-02-28 00:00:33.550543 | orchestrator -> localhost | |*+=+*.X.+        |
2026-02-28 00:00:33.550560 | orchestrator -> localhost | |.O+.=X.+         |
2026-02-28 00:00:33.550577 | orchestrator -> localhost | |*+.E. =.         |
2026-02-28 00:00:33.550593 | orchestrator -> localhost | |= +. . S         |
2026-02-28 00:00:33.550617 | orchestrator -> localhost | |.. .             |
2026-02-28 00:00:33.550634 | orchestrator -> localhost | |                 |
2026-02-28 00:00:33.550651 | orchestrator -> localhost | |                 |
2026-02-28 00:00:33.550668 | orchestrator -> localhost | |                 |
2026-02-28 00:00:33.550685 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-28 00:00:33.550724 | orchestrator -> localhost | ok: Runtime: 0:00:03.453780
2026-02-28 00:00:33.556831 |
2026-02-28 00:00:33.556916 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-28 00:00:33.617974 | orchestrator | ok
2026-02-28 00:00:33.642807 | orchestrator | included: /var/lib/zuul/builds/e667d719f59143fa93177324baaeaa58/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-28 00:00:33.665058 |
2026-02-28 00:00:33.665161 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-28 00:00:33.724123 | orchestrator | skipping: Conditional result was False
2026-02-28 00:00:33.731840 |
2026-02-28 00:00:33.731930 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-28 00:00:34.516819 | orchestrator | changed
2026-02-28 00:00:34.533051 |
2026-02-28 00:00:34.533151 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-28 00:00:34.960716 | orchestrator | ok
2026-02-28 00:00:34.965960 |
2026-02-28 00:00:34.966050 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-28 00:00:35.479144 | orchestrator | ok
2026-02-28 00:00:35.484349 |
2026-02-28 00:00:35.484468 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-28 00:00:36.082767 | orchestrator | ok
2026-02-28 00:00:36.101757 |
2026-02-28 00:00:36.101861 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-28 00:00:36.167059 | orchestrator | skipping: Conditional result was False
2026-02-28 00:00:36.205110 |
2026-02-28 00:00:36.205250 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-28 00:00:38.615917 | orchestrator -> localhost | changed
2026-02-28 00:00:38.659450 |
2026-02-28 00:00:38.659567 | TASK [add-build-sshkey : Add back temp key]
2026-02-28 00:00:40.552097 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/e667d719f59143fa93177324baaeaa58/work/e667d719f59143fa93177324baaeaa58_id_rsa (zuul-build-sshkey)
2026-02-28 00:00:40.552358 | orchestrator -> localhost | ok: Runtime: 0:00:00.046275
2026-02-28 00:00:40.561807 |
2026-02-28 00:00:40.561911 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-28 00:00:41.984348 | orchestrator | ok
2026-02-28 00:00:41.994617 |
2026-02-28 00:00:41.994723 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-28 00:00:42.092124 | orchestrator | skipping: Conditional result was False
2026-02-28 00:00:42.420339 |
2026-02-28 00:00:42.420470 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-28 00:00:43.360381 | orchestrator | ok
2026-02-28 00:00:43.377458 |
2026-02-28 00:00:43.377548 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-28 00:00:43.430133 | orchestrator | ok
2026-02-28 00:00:43.453415 |
2026-02-28 00:00:43.453523 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-28 00:00:44.353589 | orchestrator -> localhost | ok
2026-02-28 00:00:44.359741 |
2026-02-28 00:00:44.359828 | TASK [validate-host : Collect information about the host]
2026-02-28 00:00:45.894756 | orchestrator | ok
2026-02-28 00:00:45.930191 |
2026-02-28 00:00:45.943003 | TASK [validate-host : Sanitize hostname]
2026-02-28 00:00:46.151372 | orchestrator | ok
2026-02-28 00:00:46.155938 |
2026-02-28 00:00:46.156035 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-28 00:00:48.705940 | orchestrator -> localhost | changed
2026-02-28 00:00:48.711389 |
2026-02-28 00:00:48.711474 | TASK [validate-host : Collect information about zuul worker]
2026-02-28 00:00:49.435011 | orchestrator | ok
2026-02-28 00:00:49.439382 |
2026-02-28 00:00:49.439465 | TASK [validate-host : Write out all zuul information for each host]
2026-02-28 00:00:52.038857 | orchestrator -> localhost | changed
2026-02-28 00:00:52.048108 |
2026-02-28 00:00:52.048191 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-28 00:00:52.415406 | orchestrator | ok
2026-02-28 00:00:52.425179 |
2026-02-28 00:00:52.425319 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-28 00:02:17.470819 | orchestrator | changed:
2026-02-28 00:02:17.472106 | orchestrator | .d..t...... src/
2026-02-28 00:02:17.472160 | orchestrator | .d..t...... src/github.com/
2026-02-28 00:02:17.472186 | orchestrator | .d..t...... src/github.com/osism/
2026-02-28 00:02:17.472208 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-28 00:02:17.472229 | orchestrator | RedHat.yml
2026-02-28 00:02:17.487940 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-28 00:02:17.487958 | orchestrator | RedHat.yml
2026-02-28 00:02:17.488013 | orchestrator | = 1.53.0"...
2026-02-28 00:02:27.689762 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-28 00:02:27.712243 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-28 00:02:27.846569 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-02-28 00:02:28.900639 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-02-28 00:02:28.964040 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-28 00:02:29.449193 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-28 00:02:29.512929 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-28 00:02:30.250729 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-28 00:02:30.250953 | orchestrator |
2026-02-28 00:02:30.251051 | orchestrator | Providers are signed by their developers.
2026-02-28 00:02:30.251069 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-28 00:02:30.251082 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-28 00:02:30.251096 | orchestrator |
2026-02-28 00:02:30.251107 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-28 00:02:30.251131 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-28 00:02:30.251184 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-28 00:02:30.251236 | orchestrator | you run "tofu init" in the future.
2026-02-28 00:02:30.251269 | orchestrator |
2026-02-28 00:02:30.251280 | orchestrator | OpenTofu has been successfully initialized!
2026-02-28 00:02:30.251290 | orchestrator |
2026-02-28 00:02:30.251300 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-28 00:02:30.251310 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-28 00:02:30.251320 | orchestrator | should now work.
2026-02-28 00:02:30.251330 | orchestrator |
2026-02-28 00:02:30.251340 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-28 00:02:30.251350 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-28 00:02:30.251360 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-28 00:02:30.463231 | orchestrator | Created and switched to workspace "ci"!
2026-02-28 00:02:30.463282 | orchestrator |
2026-02-28 00:02:30.463288 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-28 00:02:30.463295 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-28 00:02:30.463316 | orchestrator | for this configuration.
2026-02-28 00:02:30.589429 | orchestrator | ci.auto.tfvars
2026-02-28 00:02:30.593237 | orchestrator | default_custom.tf
2026-02-28 00:02:31.559400 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-28 00:02:32.148021 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-28 00:02:32.378608 | orchestrator |
2026-02-28 00:02:32.378674 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-28 00:02:32.378681 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-28 00:02:32.378686 | orchestrator |   + create
2026-02-28 00:02:32.378691 | orchestrator |  <= read (data resources)
2026-02-28 00:02:32.378696 | orchestrator |
2026-02-28 00:02:32.378701 | orchestrator | OpenTofu will perform the following actions:
2026-02-28 00:02:32.378714 | orchestrator |
2026-02-28 00:02:32.378718 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-02-28 00:02:32.378723 | orchestrator |   # (config refers to values not yet known)
2026-02-28 00:02:32.378727 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-02-28 00:02:32.378731 | orchestrator |       + checksum = (known after apply)
2026-02-28 00:02:32.378735 | orchestrator |       + created_at = (known after apply)
2026-02-28 00:02:32.378738 | orchestrator |       + file = (known after apply)
2026-02-28 00:02:32.378742 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.378764 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.378768 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-28 00:02:32.378772 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-28 00:02:32.378776 | orchestrator |       + most_recent = true
2026-02-28 00:02:32.378780 | orchestrator |       + name = (known after apply)
2026-02-28 00:02:32.378783 | orchestrator |       + protected = (known after apply)
2026-02-28 00:02:32.378820 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.378827 | orchestrator |       + schema = (known after apply)
2026-02-28 00:02:32.378831 | orchestrator |       + size_bytes = (known after apply)
2026-02-28 00:02:32.378835 | orchestrator |       + tags = (known after apply)
2026-02-28 00:02:32.378839 | orchestrator |       + updated_at = (known after apply)
2026-02-28 00:02:32.378843 | orchestrator |     }
2026-02-28 00:02:32.378849 | orchestrator |
2026-02-28 00:02:32.378855 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-02-28 00:02:32.378859 | orchestrator |   # (config refers to values not yet known)
2026-02-28 00:02:32.378863 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-02-28 00:02:32.378867 | orchestrator |       + checksum = (known after apply)
2026-02-28 00:02:32.378870 | orchestrator |       + created_at = (known after apply)
2026-02-28 00:02:32.378874 | orchestrator |       + file = (known after apply)
2026-02-28 00:02:32.378878 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.378882 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.378885 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-28 00:02:32.378889 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-28 00:02:32.378893 | orchestrator |       + most_recent = true
2026-02-28 00:02:32.378897 | orchestrator |       + name = (known after apply)
2026-02-28 00:02:32.378900 | orchestrator |       + protected = (known after apply)
2026-02-28 00:02:32.378904 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.378908 | orchestrator |       + schema = (known after apply)
2026-02-28 00:02:32.378912 | orchestrator |       + size_bytes = (known after apply)
2026-02-28 00:02:32.378915 | orchestrator |       + tags = (known after apply)
2026-02-28 00:02:32.378919 | orchestrator |       + updated_at = (known after apply)
2026-02-28 00:02:32.378923 | orchestrator |     }
2026-02-28 00:02:32.378962 | orchestrator |
2026-02-28 00:02:32.378968 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-02-28 00:02:32.378972 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-02-28 00:02:32.378976 | orchestrator |       + content = (known after apply)
2026-02-28 00:02:32.378980 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-28 00:02:32.378984 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-28 00:02:32.378988 | orchestrator |       + content_md5 = (known after apply)
2026-02-28 00:02:32.378992 | orchestrator |       + content_sha1 = (known after apply)
2026-02-28 00:02:32.378996 | orchestrator |       + content_sha256 = (known after apply)
2026-02-28 00:02:32.379000 | orchestrator |       + content_sha512 = (known after apply)
2026-02-28 00:02:32.379004 | orchestrator |       + directory_permission = "0777"
2026-02-28 00:02:32.379008 | orchestrator |       + file_permission = "0644"
2026-02-28 00:02:32.379011 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-02-28 00:02:32.379015 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379019 | orchestrator |     }
2026-02-28 00:02:32.379087 | orchestrator |
2026-02-28 00:02:32.379096 | orchestrator |   # local_file.id_rsa_pub will be created
2026-02-28 00:02:32.379100 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-02-28 00:02:32.379104 | orchestrator |       + content = (known after apply)
2026-02-28 00:02:32.379108 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-28 00:02:32.379112 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-28 00:02:32.379115 | orchestrator |       + content_md5 = (known after apply)
2026-02-28 00:02:32.379119 | orchestrator |       + content_sha1 = (known after apply)
2026-02-28 00:02:32.379123 | orchestrator |       + content_sha256 = (known after apply)
2026-02-28 00:02:32.379132 | orchestrator |       + content_sha512 = (known after apply)
2026-02-28 00:02:32.379136 | orchestrator |       + directory_permission = "0777"
2026-02-28 00:02:32.379140 | orchestrator |       + file_permission = "0644"
2026-02-28 00:02:32.379148 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-02-28 00:02:32.379152 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379156 | orchestrator |     }
2026-02-28 00:02:32.379161 | orchestrator |
2026-02-28 00:02:32.379165 | orchestrator |   # local_file.inventory will be created
2026-02-28 00:02:32.379169 | orchestrator |   + resource "local_file" "inventory" {
2026-02-28 00:02:32.379173 | orchestrator |       + content = (known after apply)
2026-02-28 00:02:32.379177 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-28 00:02:32.379180 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-28 00:02:32.379184 | orchestrator |       + content_md5 = (known after apply)
2026-02-28 00:02:32.379188 | orchestrator |       + content_sha1 = (known after apply)
2026-02-28 00:02:32.379192 | orchestrator |       + content_sha256 = (known after apply)
2026-02-28 00:02:32.379196 | orchestrator |       + content_sha512 = (known after apply)
2026-02-28 00:02:32.379200 | orchestrator |       + directory_permission = "0777"
2026-02-28 00:02:32.379203 | orchestrator |       + file_permission = "0644"
2026-02-28 00:02:32.379207 | orchestrator |       + filename = "inventory.ci"
2026-02-28 00:02:32.379211 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379215 | orchestrator |     }
2026-02-28 00:02:32.379249 | orchestrator |
2026-02-28 00:02:32.379254 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-02-28 00:02:32.379258 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-02-28 00:02:32.379262 | orchestrator |       + content = (sensitive value)
2026-02-28 00:02:32.379265 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-28 00:02:32.379269 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-28 00:02:32.379273 | orchestrator |       + content_md5 = (known after apply)
2026-02-28 00:02:32.379277 | orchestrator |       + content_sha1 = (known after apply)
2026-02-28 00:02:32.379280 | orchestrator |       + content_sha256 = (known after apply)
2026-02-28 00:02:32.379284 | orchestrator |       + content_sha512 = (known after apply)
2026-02-28 00:02:32.379288 | orchestrator |       + directory_permission = "0700"
2026-02-28 00:02:32.379292 | orchestrator |       + file_permission = "0600"
2026-02-28 00:02:32.379296 | orchestrator |       + filename = ".id_rsa.ci"
2026-02-28 00:02:32.379299 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379303 | orchestrator |     }
2026-02-28 00:02:32.379309 | orchestrator |
2026-02-28 00:02:32.379312 | orchestrator |   # null_resource.node_semaphore will be created
2026-02-28 00:02:32.379316 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-02-28 00:02:32.379320 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379324 | orchestrator |     }
2026-02-28 00:02:32.379429 | orchestrator |
2026-02-28 00:02:32.379434 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-28 00:02:32.379438 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-28 00:02:32.379442 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.379446 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.379450 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379454 | orchestrator |       + image_id = (known after apply)
2026-02-28 00:02:32.379457 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.379461 | orchestrator |       + name = "testbed-volume-manager-base"
2026-02-28 00:02:32.379465 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.379469 | orchestrator |       + size = 80
2026-02-28 00:02:32.379473 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.379477 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.379480 | orchestrator |     }
2026-02-28 00:02:32.379486 | orchestrator |
2026-02-28 00:02:32.379490 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-28 00:02:32.379493 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:32.379497 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.379501 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.379505 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379512 | orchestrator |       + image_id = (known after apply)
2026-02-28 00:02:32.379516 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.379520 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-02-28 00:02:32.379524 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.379527 | orchestrator |       + size = 80
2026-02-28 00:02:32.379531 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.379535 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.379539 | orchestrator |     }
2026-02-28 00:02:32.379544 | orchestrator |
2026-02-28 00:02:32.379548 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-28 00:02:32.379551 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:32.379555 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.379559 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.379563 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379566 | orchestrator |       + image_id = (known after apply)
2026-02-28 00:02:32.379570 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.379574 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-02-28 00:02:32.379578 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.379581 | orchestrator |       + size = 80
2026-02-28 00:02:32.379585 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.379589 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.379592 | orchestrator |     }
2026-02-28 00:02:32.379598 | orchestrator |
2026-02-28 00:02:32.379602 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-28 00:02:32.379605 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:32.379609 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.379613 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.379617 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379620 | orchestrator |       + image_id = (known after apply)
2026-02-28 00:02:32.379624 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.379628 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-02-28 00:02:32.379631 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.379635 | orchestrator |       + size = 80
2026-02-28 00:02:32.379641 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.379645 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.379649 | orchestrator |     }
2026-02-28 00:02:32.379686 | orchestrator |
2026-02-28 00:02:32.379692 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-28 00:02:32.379696 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:32.379699 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.379703 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.379707 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379711 | orchestrator |       + image_id = (known after apply)
2026-02-28 00:02:32.379715 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.379718 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-02-28 00:02:32.379722 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.379726 | orchestrator |       + size = 80
2026-02-28 00:02:32.379730 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.379734 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.379737 | orchestrator |     }
2026-02-28 00:02:32.379776 | orchestrator |
2026-02-28 00:02:32.379797 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-28 00:02:32.379801 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:32.379805 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.379809 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.379813 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379821 | orchestrator |       + image_id = (known after apply)
2026-02-28 00:02:32.379825 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.379828 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-02-28 00:02:32.379832 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.379836 | orchestrator |       + size = 80
2026-02-28 00:02:32.379840 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.379844 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.379848 | orchestrator |     }
2026-02-28 00:02:32.379916 | orchestrator |
2026-02-28 00:02:32.379922 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-28 00:02:32.379926 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-28 00:02:32.379929 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.379933 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.379937 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379941 | orchestrator |       + image_id = (known after apply)
2026-02-28 00:02:32.379945 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.379948 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-02-28 00:02:32.379952 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.379956 | orchestrator |       + size = 80
2026-02-28 00:02:32.379959 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.379963 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.379967 | orchestrator |     }
2026-02-28 00:02:32.379972 | orchestrator |
2026-02-28 00:02:32.379976 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-28 00:02:32.379980 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:32.379984 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.379988 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.379992 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.379995 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.379999 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-02-28 00:02:32.380003 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.380007 | orchestrator |       + size = 20
2026-02-28 00:02:32.380011 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.380015 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.380019 | orchestrator |     }
2026-02-28 00:02:32.380116 | orchestrator |
2026-02-28 00:02:32.380122 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-28 00:02:32.380126 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:32.380129 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.380133 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.380137 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.380141 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.380144 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-02-28 00:02:32.380148 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.380152 | orchestrator |       + size = 20
2026-02-28 00:02:32.380156 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.380160 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.380163 | orchestrator |     }
2026-02-28 00:02:32.380197 | orchestrator |
2026-02-28 00:02:32.380202 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-28 00:02:32.380206 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:32.380210 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.380214 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.380217 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.380221 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.380225 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-02-28 00:02:32.380229 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.380236 | orchestrator |       + size = 20
2026-02-28 00:02:32.380240 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.380244 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.380248 | orchestrator |     }
2026-02-28 00:02:32.380436 | orchestrator |
2026-02-28 00:02:32.380442 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-28 00:02:32.380446 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:32.380450 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.380454 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.380457 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.380464 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.380468 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-02-28 00:02:32.380472 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.380475 | orchestrator |       + size = 20
2026-02-28 00:02:32.380479 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.380483 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.380487 | orchestrator |     }
2026-02-28 00:02:32.380491 | orchestrator |
2026-02-28 00:02:32.380494 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-28 00:02:32.380498 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:32.380502 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.380506 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.380509 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.380513 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.380517 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-02-28 00:02:32.380521 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.380525 | orchestrator |       + size = 20
2026-02-28 00:02:32.380528 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.380532 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.380536 | orchestrator |     }
2026-02-28 00:02:32.380541 | orchestrator |
2026-02-28 00:02:32.380545 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-28 00:02:32.380549 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:32.380553 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.380557 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.380560 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.380564 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.380568 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-02-28 00:02:32.380572 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.380575 | orchestrator |       + size = 20
2026-02-28 00:02:32.380579 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.380583 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.380587 | orchestrator |     }
2026-02-28 00:02:32.380590 | orchestrator |
2026-02-28 00:02:32.380594 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-28 00:02:32.380598 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:32.380602 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.380605 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.380609 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.380613 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.380617 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-02-28 00:02:32.380621 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.380624 | orchestrator |       + size = 20
2026-02-28 00:02:32.380628 | orchestrator |       + volume_retype_policy = "never"
2026-02-28 00:02:32.380632 | orchestrator |       + volume_type = "ssd"
2026-02-28 00:02:32.380635 | orchestrator |     }
2026-02-28 00:02:32.380641 | orchestrator |
2026-02-28 00:02:32.380645 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-28 00:02:32.380648 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-28 00:02:32.380655 | orchestrator |       + attachment = (known after apply)
2026-02-28 00:02:32.380659 | orchestrator |       + availability_zone = "nova"
2026-02-28 00:02:32.380663 | orchestrator |       + id = (known after apply)
2026-02-28 00:02:32.380667 | orchestrator |       + metadata = (known after apply)
2026-02-28 00:02:32.380670 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-02-28 00:02:32.380674 | orchestrator |       + region = (known after apply)
2026-02-28 00:02:32.380678 | orchestrator | + size = 20 2026-02-28 00:02:32.380682 | orchestrator | + volume_retype_policy = "never" 2026-02-28 00:02:32.380685 | orchestrator | + volume_type = "ssd" 2026-02-28 00:02:32.380689 | orchestrator | } 2026-02-28 00:02:32.380739 | orchestrator | 2026-02-28 00:02:32.380747 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-28 00:02:32.380751 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-28 00:02:32.380754 | orchestrator | + attachment = (known after apply) 2026-02-28 00:02:32.380758 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:32.380762 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.380766 | orchestrator | + metadata = (known after apply) 2026-02-28 00:02:32.380770 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-28 00:02:32.380773 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.380777 | orchestrator | + size = 20 2026-02-28 00:02:32.380781 | orchestrator | + volume_retype_policy = "never" 2026-02-28 00:02:32.380796 | orchestrator | + volume_type = "ssd" 2026-02-28 00:02:32.380801 | orchestrator | } 2026-02-28 00:02:32.381044 | orchestrator | 2026-02-28 00:02:32.381050 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-28 00:02:32.381053 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-28 00:02:32.381057 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:32.381061 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:32.381065 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:32.381069 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:32.381073 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:32.381076 | orchestrator | + config_drive = true 2026-02-28 00:02:32.381083 | orchestrator | + created = (known after apply) 
2026-02-28 00:02:32.381087 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:32.381091 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-28 00:02:32.381094 | orchestrator | + force_delete = false 2026-02-28 00:02:32.381098 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:32.381102 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.381106 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:32.381109 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:32.381113 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:32.381117 | orchestrator | + name = "testbed-manager" 2026-02-28 00:02:32.381121 | orchestrator | + power_state = "active" 2026-02-28 00:02:32.381125 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.381128 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:32.381132 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:32.381136 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:32.381140 | orchestrator | + user_data = (sensitive value) 2026-02-28 00:02:32.381143 | orchestrator | 2026-02-28 00:02:32.381147 | orchestrator | + block_device { 2026-02-28 00:02:32.381151 | orchestrator | + boot_index = 0 2026-02-28 00:02:32.381155 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:32.381159 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:32.381162 | orchestrator | + multiattach = false 2026-02-28 00:02:32.381166 | orchestrator | + source_type = "volume" 2026-02-28 00:02:32.381170 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.381177 | orchestrator | } 2026-02-28 00:02:32.381181 | orchestrator | 2026-02-28 00:02:32.381185 | orchestrator | + network { 2026-02-28 00:02:32.381189 | orchestrator | + access_network = false 2026-02-28 00:02:32.381192 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:32.381196 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-02-28 00:02:32.381200 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:32.381204 | orchestrator | + name = (known after apply) 2026-02-28 00:02:32.381207 | orchestrator | + port = (known after apply) 2026-02-28 00:02:32.381211 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.381215 | orchestrator | } 2026-02-28 00:02:32.381219 | orchestrator | } 2026-02-28 00:02:32.381346 | orchestrator | 2026-02-28 00:02:32.381352 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-28 00:02:32.381356 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:32.381359 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:32.381363 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:32.381367 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:32.381371 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:32.381375 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:32.381378 | orchestrator | + config_drive = true 2026-02-28 00:02:32.381382 | orchestrator | + created = (known after apply) 2026-02-28 00:02:32.381386 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:32.381390 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:32.381394 | orchestrator | + force_delete = false 2026-02-28 00:02:32.381397 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:32.381401 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.381405 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:32.381409 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:32.381413 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:32.381416 | orchestrator | + name = "testbed-node-0" 2026-02-28 00:02:32.381420 | orchestrator | + power_state = "active" 2026-02-28 00:02:32.381424 | orchestrator | + region 
= (known after apply) 2026-02-28 00:02:32.381428 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:32.381431 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:32.381435 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:32.381439 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:32.381443 | orchestrator | 2026-02-28 00:02:32.381447 | orchestrator | + block_device { 2026-02-28 00:02:32.381450 | orchestrator | + boot_index = 0 2026-02-28 00:02:32.381454 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:32.381458 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:32.381462 | orchestrator | + multiattach = false 2026-02-28 00:02:32.381465 | orchestrator | + source_type = "volume" 2026-02-28 00:02:32.381469 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.381473 | orchestrator | } 2026-02-28 00:02:32.381477 | orchestrator | 2026-02-28 00:02:32.381480 | orchestrator | + network { 2026-02-28 00:02:32.381484 | orchestrator | + access_network = false 2026-02-28 00:02:32.381488 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:32.381492 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:32.381495 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:32.381499 | orchestrator | + name = (known after apply) 2026-02-28 00:02:32.381503 | orchestrator | + port = (known after apply) 2026-02-28 00:02:32.381507 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.381510 | orchestrator | } 2026-02-28 00:02:32.381514 | orchestrator | } 2026-02-28 00:02:32.381674 | orchestrator | 2026-02-28 00:02:32.381683 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-28 00:02:32.381687 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:32.381691 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 
00:02:32.381698 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:32.381702 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:32.381706 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:32.381709 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:32.381713 | orchestrator | + config_drive = true 2026-02-28 00:02:32.381717 | orchestrator | + created = (known after apply) 2026-02-28 00:02:32.381721 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:32.381724 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:32.381728 | orchestrator | + force_delete = false 2026-02-28 00:02:32.381732 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:32.381736 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.381740 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:32.381743 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:32.381747 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:32.381751 | orchestrator | + name = "testbed-node-1" 2026-02-28 00:02:32.381755 | orchestrator | + power_state = "active" 2026-02-28 00:02:32.381758 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.381762 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:32.381766 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:32.381770 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:32.381779 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:32.381783 | orchestrator | 2026-02-28 00:02:32.381797 | orchestrator | + block_device { 2026-02-28 00:02:32.381801 | orchestrator | + boot_index = 0 2026-02-28 00:02:32.381805 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:32.381808 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:32.381812 | orchestrator | + multiattach = false 2026-02-28 
00:02:32.381816 | orchestrator | + source_type = "volume" 2026-02-28 00:02:32.381820 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.381824 | orchestrator | } 2026-02-28 00:02:32.381827 | orchestrator | 2026-02-28 00:02:32.381831 | orchestrator | + network { 2026-02-28 00:02:32.381835 | orchestrator | + access_network = false 2026-02-28 00:02:32.381839 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:32.381843 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:32.381846 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:32.381850 | orchestrator | + name = (known after apply) 2026-02-28 00:02:32.381854 | orchestrator | + port = (known after apply) 2026-02-28 00:02:32.381858 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.381862 | orchestrator | } 2026-02-28 00:02:32.381865 | orchestrator | } 2026-02-28 00:02:32.381968 | orchestrator | 2026-02-28 00:02:32.381974 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-28 00:02:32.381978 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:32.381982 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:32.381986 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:32.381990 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:32.381994 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:32.381997 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:32.382001 | orchestrator | + config_drive = true 2026-02-28 00:02:32.382005 | orchestrator | + created = (known after apply) 2026-02-28 00:02:32.382009 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:32.382012 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:32.382037 | orchestrator | + force_delete = false 2026-02-28 00:02:32.382041 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 
00:02:32.382045 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.382049 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:32.382058 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:32.382062 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:32.382066 | orchestrator | + name = "testbed-node-2" 2026-02-28 00:02:32.382069 | orchestrator | + power_state = "active" 2026-02-28 00:02:32.382073 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.382077 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:32.382081 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:32.382085 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:32.382088 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:32.382092 | orchestrator | 2026-02-28 00:02:32.382096 | orchestrator | + block_device { 2026-02-28 00:02:32.382100 | orchestrator | + boot_index = 0 2026-02-28 00:02:32.382104 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:32.382107 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:32.382111 | orchestrator | + multiattach = false 2026-02-28 00:02:32.382115 | orchestrator | + source_type = "volume" 2026-02-28 00:02:32.382119 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.382123 | orchestrator | } 2026-02-28 00:02:32.382126 | orchestrator | 2026-02-28 00:02:32.382130 | orchestrator | + network { 2026-02-28 00:02:32.382134 | orchestrator | + access_network = false 2026-02-28 00:02:32.382138 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:32.382142 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:32.382145 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:32.382149 | orchestrator | + name = (known after apply) 2026-02-28 00:02:32.382153 | orchestrator | + port = (known after apply) 2026-02-28 00:02:32.382157 | orchestrator | + uuid 
= (known after apply) 2026-02-28 00:02:32.382161 | orchestrator | } 2026-02-28 00:02:32.382164 | orchestrator | } 2026-02-28 00:02:32.382262 | orchestrator | 2026-02-28 00:02:32.382272 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-28 00:02:32.382276 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:32.382280 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:32.382284 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:32.382288 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:32.382292 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:32.382295 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:32.382299 | orchestrator | + config_drive = true 2026-02-28 00:02:32.382303 | orchestrator | + created = (known after apply) 2026-02-28 00:02:32.382307 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:32.382310 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:32.382314 | orchestrator | + force_delete = false 2026-02-28 00:02:32.382318 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:32.382322 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.382325 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:32.382329 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:32.382333 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:32.382337 | orchestrator | + name = "testbed-node-3" 2026-02-28 00:02:32.382340 | orchestrator | + power_state = "active" 2026-02-28 00:02:32.382344 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.382348 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:32.382351 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:32.382355 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:32.382359 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:32.382363 | orchestrator | 2026-02-28 00:02:32.382367 | orchestrator | + block_device { 2026-02-28 00:02:32.382370 | orchestrator | + boot_index = 0 2026-02-28 00:02:32.382374 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:32.382378 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:32.382385 | orchestrator | + multiattach = false 2026-02-28 00:02:32.382389 | orchestrator | + source_type = "volume" 2026-02-28 00:02:32.382393 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.382397 | orchestrator | } 2026-02-28 00:02:32.382400 | orchestrator | 2026-02-28 00:02:32.382404 | orchestrator | + network { 2026-02-28 00:02:32.382408 | orchestrator | + access_network = false 2026-02-28 00:02:32.382412 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:32.382415 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:32.382419 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:32.382423 | orchestrator | + name = (known after apply) 2026-02-28 00:02:32.382426 | orchestrator | + port = (known after apply) 2026-02-28 00:02:32.382430 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.382434 | orchestrator | } 2026-02-28 00:02:32.382438 | orchestrator | } 2026-02-28 00:02:32.382520 | orchestrator | 2026-02-28 00:02:32.382527 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-28 00:02:32.382530 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:32.382534 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:32.382538 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:32.382542 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:32.382546 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:32.382549 | orchestrator | + availability_zone = "nova" 2026-02-28 
00:02:32.382553 | orchestrator | + config_drive = true 2026-02-28 00:02:32.382557 | orchestrator | + created = (known after apply) 2026-02-28 00:02:32.382560 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:32.382564 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:32.382568 | orchestrator | + force_delete = false 2026-02-28 00:02:32.382572 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:32.382576 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.382580 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:32.382583 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:32.382587 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:32.382591 | orchestrator | + name = "testbed-node-4" 2026-02-28 00:02:32.382594 | orchestrator | + power_state = "active" 2026-02-28 00:02:32.382598 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.382602 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:32.382606 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:32.382609 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:32.382613 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:32.382617 | orchestrator | 2026-02-28 00:02:32.382621 | orchestrator | + block_device { 2026-02-28 00:02:32.382625 | orchestrator | + boot_index = 0 2026-02-28 00:02:32.382628 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:32.382632 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:32.382636 | orchestrator | + multiattach = false 2026-02-28 00:02:32.382639 | orchestrator | + source_type = "volume" 2026-02-28 00:02:32.382643 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.382647 | orchestrator | } 2026-02-28 00:02:32.382651 | orchestrator | 2026-02-28 00:02:32.382654 | orchestrator | + network { 2026-02-28 00:02:32.382658 | orchestrator | + 
access_network = false 2026-02-28 00:02:32.382662 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:32.382666 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:32.382669 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:32.382673 | orchestrator | + name = (known after apply) 2026-02-28 00:02:32.382677 | orchestrator | + port = (known after apply) 2026-02-28 00:02:32.382680 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.382684 | orchestrator | } 2026-02-28 00:02:32.382688 | orchestrator | } 2026-02-28 00:02:32.383043 | orchestrator | 2026-02-28 00:02:32.383049 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-28 00:02:32.383053 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-28 00:02:32.383056 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-28 00:02:32.383060 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-28 00:02:32.383064 | orchestrator | + all_metadata = (known after apply) 2026-02-28 00:02:32.383068 | orchestrator | + all_tags = (known after apply) 2026-02-28 00:02:32.383071 | orchestrator | + availability_zone = "nova" 2026-02-28 00:02:32.383075 | orchestrator | + config_drive = true 2026-02-28 00:02:32.383079 | orchestrator | + created = (known after apply) 2026-02-28 00:02:32.383083 | orchestrator | + flavor_id = (known after apply) 2026-02-28 00:02:32.383086 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-28 00:02:32.383090 | orchestrator | + force_delete = false 2026-02-28 00:02:32.383094 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-28 00:02:32.383098 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.383101 | orchestrator | + image_id = (known after apply) 2026-02-28 00:02:32.383105 | orchestrator | + image_name = (known after apply) 2026-02-28 00:02:32.383109 | orchestrator | + key_pair = "testbed" 2026-02-28 00:02:32.383112 | orchestrator | 
+ name = "testbed-node-5" 2026-02-28 00:02:32.383116 | orchestrator | + power_state = "active" 2026-02-28 00:02:32.383120 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.383124 | orchestrator | + security_groups = (known after apply) 2026-02-28 00:02:32.383127 | orchestrator | + stop_before_destroy = false 2026-02-28 00:02:32.383131 | orchestrator | + updated = (known after apply) 2026-02-28 00:02:32.383135 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-28 00:02:32.383138 | orchestrator | 2026-02-28 00:02:32.383142 | orchestrator | + block_device { 2026-02-28 00:02:32.383146 | orchestrator | + boot_index = 0 2026-02-28 00:02:32.383150 | orchestrator | + delete_on_termination = false 2026-02-28 00:02:32.383153 | orchestrator | + destination_type = "volume" 2026-02-28 00:02:32.383157 | orchestrator | + multiattach = false 2026-02-28 00:02:32.383161 | orchestrator | + source_type = "volume" 2026-02-28 00:02:32.383165 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.383168 | orchestrator | } 2026-02-28 00:02:32.383172 | orchestrator | 2026-02-28 00:02:32.383176 | orchestrator | + network { 2026-02-28 00:02:32.383180 | orchestrator | + access_network = false 2026-02-28 00:02:32.383183 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-28 00:02:32.383187 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-28 00:02:32.383191 | orchestrator | + mac = (known after apply) 2026-02-28 00:02:32.383195 | orchestrator | + name = (known after apply) 2026-02-28 00:02:32.383198 | orchestrator | + port = (known after apply) 2026-02-28 00:02:32.383202 | orchestrator | + uuid = (known after apply) 2026-02-28 00:02:32.383206 | orchestrator | } 2026-02-28 00:02:32.383210 | orchestrator | } 2026-02-28 00:02:32.383215 | orchestrator | 2026-02-28 00:02:32.383219 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-28 00:02:32.383223 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-02-28 00:02:32.383227 | orchestrator | + fingerprint = (known after apply) 2026-02-28 00:02:32.383230 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.383234 | orchestrator | + name = "testbed" 2026-02-28 00:02:32.383238 | orchestrator | + private_key = (sensitive value) 2026-02-28 00:02:32.383242 | orchestrator | + public_key = (known after apply) 2026-02-28 00:02:32.383245 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.383249 | orchestrator | + user_id = (known after apply) 2026-02-28 00:02:32.383253 | orchestrator | } 2026-02-28 00:02:32.383256 | orchestrator | 2026-02-28 00:02:32.383260 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-28 00:02:32.383264 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-28 00:02:32.383272 | orchestrator | + device = (known after apply) 2026-02-28 00:02:32.383276 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.383280 | orchestrator | + instance_id = (known after apply) 2026-02-28 00:02:32.383283 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.383290 | orchestrator | + volume_id = (known after apply) 2026-02-28 00:02:32.383294 | orchestrator | } 2026-02-28 00:02:32.383299 | orchestrator | 2026-02-28 00:02:32.383303 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-28 00:02:32.383307 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-28 00:02:32.383311 | orchestrator | + device = (known after apply) 2026-02-28 00:02:32.383314 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.383318 | orchestrator | + instance_id = (known after apply) 2026-02-28 00:02:32.383322 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.383326 | orchestrator | + volume_id = (known after apply) 2026-02-28 
00:02:32.383329 | orchestrator | } 2026-02-28 00:02:32.383365 | orchestrator | 2026-02-28 00:02:32.383371 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-28 00:02:32.383374 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-28 00:02:32.383378 | orchestrator | + device = (known after apply) 2026-02-28 00:02:32.383382 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.383385 | orchestrator | + instance_id = (known after apply) 2026-02-28 00:02:32.383389 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.383393 | orchestrator | + volume_id = (known after apply) 2026-02-28 00:02:32.383397 | orchestrator | } 2026-02-28 00:02:32.383440 | orchestrator | 2026-02-28 00:02:32.383446 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-02-28 00:02:32.383450 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-28 00:02:32.383453 | orchestrator | + device = (known after apply) 2026-02-28 00:02:32.383457 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.383461 | orchestrator | + instance_id = (known after apply) 2026-02-28 00:02:32.383465 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.383468 | orchestrator | + volume_id = (known after apply) 2026-02-28 00:02:32.383472 | orchestrator | } 2026-02-28 00:02:32.383477 | orchestrator | 2026-02-28 00:02:32.383481 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-02-28 00:02:32.383485 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-28 00:02:32.383489 | orchestrator | + device = (known after apply) 2026-02-28 00:02:32.383492 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.383496 | orchestrator | + instance_id = (known after apply) 2026-02-28 00:02:32.383500 | 
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-02-28 00:02:32.388059 | orchestrator | + ip_version = 4 2026-02-28 00:02:32.388065 | orchestrator | + ipv6_address_mode = (known after apply) 2026-02-28 00:02:32.388071 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-02-28 00:02:32.388077 | orchestrator | + name = "subnet-testbed-management" 2026-02-28 00:02:32.388083 | orchestrator | + network_id = (known after apply) 2026-02-28 00:02:32.388090 | orchestrator | + no_gateway = false 2026-02-28 00:02:32.388096 | orchestrator | + region = (known after apply) 2026-02-28 00:02:32.388102 | orchestrator | + service_types = (known after apply) 2026-02-28 00:02:32.388113 | orchestrator | + tenant_id = (known after apply) 2026-02-28 00:02:32.388119 | orchestrator | 2026-02-28 00:02:32.388125 | orchestrator | + allocation_pool { 2026-02-28 00:02:32.388131 | orchestrator | + end = "192.168.31.250" 2026-02-28 00:02:32.388137 | orchestrator | + start = "192.168.31.200" 2026-02-28 00:02:32.388144 | orchestrator | } 2026-02-28 00:02:32.388150 | orchestrator | } 2026-02-28 00:02:32.388156 | orchestrator | 2026-02-28 00:02:32.388162 | orchestrator | # terraform_data.image will be created 2026-02-28 00:02:32.388168 | orchestrator | + resource "terraform_data" "image" { 2026-02-28 00:02:32.388174 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.388181 | orchestrator | + input = "Ubuntu 24.04" 2026-02-28 00:02:32.388187 | orchestrator | + output = (known after apply) 2026-02-28 00:02:32.388193 | orchestrator | } 2026-02-28 00:02:32.388199 | orchestrator | 2026-02-28 00:02:32.388205 | orchestrator | # terraform_data.image_node will be created 2026-02-28 00:02:32.388211 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-28 00:02:32.388218 | orchestrator | + id = (known after apply) 2026-02-28 00:02:32.388224 | orchestrator | + input = "Ubuntu 24.04" 2026-02-28 00:02:32.388230 | orchestrator | + output = (known after apply) 2026-02-28 00:02:32.388236 | orchestrator | } 2026-02-28 
00:02:32.388242 | orchestrator | 2026-02-28 00:02:32.388248 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 2026-02-28 00:02:32.388254 | orchestrator | 2026-02-28 00:02:32.388260 | orchestrator | Changes to Outputs: 2026-02-28 00:02:32.388267 | orchestrator | + manager_address = (sensitive value) 2026-02-28 00:02:32.388273 | orchestrator | + private_key = (sensitive value) 2026-02-28 00:02:32.576259 | orchestrator | terraform_data.image: Creating... 2026-02-28 00:02:32.576516 | orchestrator | terraform_data.image: Creation complete after 0s [id=12a700c3-d35a-13d0-7927-55fcb4ef41b5] 2026-02-28 00:02:33.591376 | orchestrator | terraform_data.image_node: Creating... 2026-02-28 00:02:33.591447 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=a1da847b-0517-8577-4fe2-796d6f000120] 2026-02-28 00:02:33.608437 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-02-28 00:02:33.608515 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-02-28 00:02:33.608525 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-02-28 00:02:33.614949 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-02-28 00:02:33.615959 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-02-28 00:02:33.617010 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-02-28 00:02:33.618247 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-02-28 00:02:33.618856 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-02-28 00:02:33.623167 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-02-28 00:02:33.626125 | orchestrator | openstack_networking_network_v2.net_management: Creating... 
2026-02-28 00:02:34.091783 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-28 00:02:34.099243 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-02-28 00:02:34.101055 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-28 00:02:34.108126 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-02-28 00:02:34.182190 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-02-28 00:02:34.185621 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-02-28 00:02:34.900554 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=c8999de0-c134-4073-a55a-c57eefc49c4a] 2026-02-28 00:02:36.188440 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-02-28 00:02:37.287342 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=74c1e4c5-3021-4968-89ab-b5ccd24df7f0] 2026-02-28 00:02:37.293558 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-02-28 00:02:37.323942 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=5ed1e25e-e858-43bf-b647-15f2d5789185] 2026-02-28 00:02:37.329429 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2026-02-28 00:02:37.336221 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=4f48d2b3-724f-4801-86d9-3346f8b02ca0] 2026-02-28 00:02:37.338532 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=dee7b1c7-019b-4aff-807a-ca0205e3afa9] 2026-02-28 00:02:37.351072 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=6e2886f6-2d16-4655-86a5-4832cbb6b1fd] 2026-02-28 00:02:37.352093 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-02-28 00:02:37.354083 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-02-28 00:02:37.358649 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-02-28 00:02:37.388997 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=e0efcc73-9d13-408e-8d84-f67f704dc102] 2026-02-28 00:02:37.393358 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-02-28 00:02:37.410615 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=2f6d4770-6b80-415d-bad7-939321dd0d14] 2026-02-28 00:02:37.430175 | orchestrator | local_file.id_rsa_pub: Creating... 2026-02-28 00:02:37.434719 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=5fa97aac87d5e70c281afbcdb61bb2511c8cd075] 2026-02-28 00:02:37.444105 | orchestrator | local_sensitive_file.id_rsa: Creating... 
2026-02-28 00:02:37.446284 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=d23b355c-9115-4a32-83d9-d27c9bfa2660] 2026-02-28 00:02:37.453635 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=38dbb877-01c9-4d16-8c09-dabf832ed02d] 2026-02-28 00:02:37.454383 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=8953e2bf2cfe7a981e97062a2cfe4e2171fa1829] 2026-02-28 00:02:37.455367 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-02-28 00:02:38.328405 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=0524a285-b1bf-4737-b91a-ca6b10871a2b] 2026-02-28 00:02:38.410147 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=50cfd9d2-ae91-4770-acd3-d89908a780f0] 2026-02-28 00:02:38.419805 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-02-28 00:02:40.776612 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=b6c0fd8e-4cd9-4c06-a89e-d9062260a288] 2026-02-28 00:02:40.795677 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=762d8a21-5374-4a80-ba11-21b5200d3acc] 2026-02-28 00:02:40.822238 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=48b55e47-7594-497c-8e74-39b6ba356462] 2026-02-28 00:02:40.868729 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=31d0474f-d148-4fa7-8e21-0caa01fecd6c] 2026-02-28 00:02:40.891419 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=c4377a96-07c8-49d6-8f0b-9a269b92cb14] 2026-02-28 00:02:40.892563 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s 
[id=c9d3095c-7509-4fbe-ae74-63c4ac873621] 2026-02-28 00:02:44.153368 | orchestrator | openstack_networking_router_v2.router: Creation complete after 6s [id=14d0756e-e6f2-4b73-bd05-529e6a196b9b] 2026-02-28 00:02:44.160590 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-02-28 00:02:44.161491 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-02-28 00:02:44.162380 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-02-28 00:02:44.454763 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=62327635-81a0-4fc2-a1f5-b12a42d6d076] 2026-02-28 00:02:44.470149 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-02-28 00:02:44.476746 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-02-28 00:02:44.479115 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-02-28 00:02:44.479347 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-02-28 00:02:44.479427 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-02-28 00:02:44.481433 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-02-28 00:02:44.488181 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-02-28 00:02:44.488294 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=0d084582-7435-4bf0-af21-cf73261ab86a] 2026-02-28 00:02:44.491859 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-02-28 00:02:44.501174 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 
2026-02-28 00:02:45.030146 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=33155034-547d-47d3-a073-6aee5184cda2] 2026-02-28 00:02:45.038895 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-02-28 00:02:45.305411 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=4afd8dd1-d841-4279-8554-b75b9376cb5a] 2026-02-28 00:02:45.313933 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-02-28 00:02:45.328911 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=bafe0bc4-0f07-424e-948e-b29c0a75ea5f] 2026-02-28 00:02:45.339487 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-02-28 00:02:45.479019 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=2e918150-a06b-4a79-bf0e-8d2673b605a0] 2026-02-28 00:02:45.491056 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-02-28 00:02:45.491216 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=c0f5ae66-1336-46eb-8899-4c2aeee2506c] 2026-02-28 00:02:45.495868 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-02-28 00:02:45.671856 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=6a8f6837-445a-4f5a-8a22-2c480f94c786] 2026-02-28 00:02:45.676690 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=b25bf40a-83a1-411b-a30b-67c80781d7cd] 2026-02-28 00:02:45.679148 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
2026-02-28 00:02:45.682925 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-02-28 00:02:45.720602 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=367355b9-558f-496d-9ace-bd311c37dc3d] 2026-02-28 00:02:45.779745 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=4aa6fc65-c377-4289-881c-342eca4f7700] 2026-02-28 00:02:45.921702 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=e00cdb88-a92d-4868-b714-089ef35f888c] 2026-02-28 00:02:45.928289 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=a4274f56-9410-4243-a458-861c616affec] 2026-02-28 00:02:46.118013 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=7401dc4a-b325-4915-a9a3-eac4f11f3548] 2026-02-28 00:02:46.120920 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=c6cbc74b-ed32-4205-9e84-3cf0fc7d1115] 2026-02-28 00:02:46.125257 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=9c60a6bb-204b-4726-bc86-6be232e5d4fc] 2026-02-28 00:02:46.369048 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=52083e30-e116-4623-b10f-eb236868cf61] 2026-02-28 00:02:47.149471 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=249d2b38-ce39-4ef9-8987-86000f017b89] 2026-02-28 00:02:49.474147 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=9902bc81-61f1-4fa8-b2b5-353ec1047382] 2026-02-28 00:02:49.498944 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: 
Creating... 2026-02-28 00:02:49.507682 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-02-28 00:02:49.508535 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-02-28 00:02:49.514266 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-02-28 00:02:49.519119 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-02-28 00:02:49.526452 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-02-28 00:02:49.532687 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-02-28 00:02:51.694311 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=23415a43-7adb-4cf3-b866-4512e214061c] 2026-02-28 00:02:51.700493 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-02-28 00:02:51.709114 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-02-28 00:02:51.709314 | orchestrator | local_file.inventory: Creating... 2026-02-28 00:02:51.712927 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=34c24b3adbb2ec67c41840d5fb1cbedc7feb1ff8] 2026-02-28 00:02:51.713684 | orchestrator | local_file.inventory: Creation complete after 0s [id=579ffe0bf2c33b5e7863571e0e53fb396ad3e0ba] 2026-02-28 00:02:52.569872 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=23415a43-7adb-4cf3-b866-4512e214061c] 2026-02-28 00:02:59.515243 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-02-28 00:02:59.515377 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-02-28 00:02:59.517443 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[10s elapsed] 2026-02-28 00:02:59.519857 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-02-28 00:02:59.531312 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-02-28 00:02:59.533672 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-02-28 00:03:09.524222 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-02-28 00:03:09.524304 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-02-28 00:03:09.524325 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-02-28 00:03:09.524335 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-02-28 00:03:09.531440 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-02-28 00:03:09.534692 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-02-28 00:03:19.532428 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-02-28 00:03:19.532536 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-02-28 00:03:19.532564 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-02-28 00:03:19.532577 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-02-28 00:03:19.532589 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-02-28 00:03:19.535699 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2026-02-28 00:03:20.429618 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=1493c0d5-aa00-49d9-b201-91ef7f53419f] 2026-02-28 00:03:29.541441 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-02-28 00:03:29.541533 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-02-28 00:03:29.541544 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-02-28 00:03:29.541550 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-02-28 00:03:29.541582 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-02-28 00:03:30.571438 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=471466ce-4eb3-4e06-a749-01f4a23ca3b4] 2026-02-28 00:03:30.624509 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=177b8d1f-a590-4cc7-b6c9-e70bb026deeb] 2026-02-28 00:03:30.710010 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=269744c9-7444-436e-ad47-341990f824a0] 2026-02-28 00:03:39.546211 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed] 2026-02-28 00:03:39.546340 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed] 2026-02-28 00:03:40.579742 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=3aa77aa8-60e3-4d79-9796-54e3e5b37659] 2026-02-28 00:03:41.334505 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=28fc374d-0a42-45fa-9e03-87286520c0e1] 2026-02-28 00:03:41.357715 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 
2026-02-28 00:03:41.373449 | orchestrator | null_resource.node_semaphore: Creating... 2026-02-28 00:03:41.375195 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-02-28 00:03:41.379844 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-02-28 00:03:41.381932 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=531307861693072271] 2026-02-28 00:03:41.384976 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-02-28 00:03:41.386145 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-02-28 00:03:41.388148 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-02-28 00:03:41.388457 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-02-28 00:03:41.388527 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-02-28 00:03:41.405433 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-02-28 00:03:41.434655 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
2026-02-28 00:03:44.810877 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=269744c9-7444-436e-ad47-341990f824a0/dee7b1c7-019b-4aff-807a-ca0205e3afa9] 2026-02-28 00:03:44.833492 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=28fc374d-0a42-45fa-9e03-87286520c0e1/4f48d2b3-724f-4801-86d9-3346f8b02ca0] 2026-02-28 00:03:44.834287 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=177b8d1f-a590-4cc7-b6c9-e70bb026deeb/2f6d4770-6b80-415d-bad7-939321dd0d14] 2026-02-28 00:03:44.834347 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=177b8d1f-a590-4cc7-b6c9-e70bb026deeb/74c1e4c5-3021-4968-89ab-b5ccd24df7f0] 2026-02-28 00:03:44.846400 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=269744c9-7444-436e-ad47-341990f824a0/e0efcc73-9d13-408e-8d84-f67f704dc102] 2026-02-28 00:03:44.863980 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=28fc374d-0a42-45fa-9e03-87286520c0e1/6e2886f6-2d16-4655-86a5-4832cbb6b1fd] 2026-02-28 00:03:50.940421 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=177b8d1f-a590-4cc7-b6c9-e70bb026deeb/d23b355c-9115-4a32-83d9-d27c9bfa2660] 2026-02-28 00:03:50.946687 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=269744c9-7444-436e-ad47-341990f824a0/5ed1e25e-e858-43bf-b647-15f2d5789185] 2026-02-28 00:03:50.968442 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=28fc374d-0a42-45fa-9e03-87286520c0e1/38dbb877-01c9-4d16-8c09-dabf832ed02d] 2026-02-28 00:03:51.438294 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-02-28 00:04:01.439246 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-02-28 00:04:01.869661 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=d17f759c-2ed0-4afc-a7f4-7af26be5ba32] 2026-02-28 00:04:01.934904 | orchestrator | 2026-02-28 00:04:01.935034 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-02-28 00:04:01.935060 | orchestrator | 2026-02-28 00:04:01.935081 | orchestrator | Outputs: 2026-02-28 00:04:01.935102 | orchestrator | 2026-02-28 00:04:01.935124 | orchestrator | manager_address = 2026-02-28 00:04:01.935141 | orchestrator | private_key = 2026-02-28 00:04:02.018708 | orchestrator | ok: Runtime: 0:01:34.476967 2026-02-28 00:04:02.045372 | 2026-02-28 00:04:02.045490 | TASK [Create infrastructure (stable)] 2026-02-28 00:04:02.589696 | orchestrator | skipping: Conditional result was False 2026-02-28 00:04:02.605845 | 2026-02-28 00:04:02.605979 | TASK [Fetch manager address] 2026-02-28 00:04:03.029796 | orchestrator | ok 2026-02-28 00:04:03.039874 | 2026-02-28 00:04:03.039986 | TASK [Set manager_host address] 2026-02-28 00:04:03.123652 | orchestrator | ok 2026-02-28 00:04:03.131265 | 2026-02-28 00:04:03.131397 | LOOP [Update ansible collections] 2026-02-28 00:04:04.053181 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:04:04.053454 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-28 00:04:04.053493 | orchestrator | Starting galaxy collection install process 2026-02-28 00:04:04.053518 | orchestrator | Process install dependency map 2026-02-28 00:04:04.053540 | orchestrator | Starting collection install process 2026-02-28 00:04:04.053561 | orchestrator | Installing 'osism.commons:999.0.0' to 
'/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2026-02-28 00:04:04.053585 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2026-02-28 00:04:04.053615 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-02-28 00:04:04.053673 | orchestrator | ok: Item: commons Runtime: 0:00:00.613938 2026-02-28 00:04:05.105467 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-28 00:04:05.105610 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:04:05.105664 | orchestrator | Starting galaxy collection install process 2026-02-28 00:04:05.105705 | orchestrator | Process install dependency map 2026-02-28 00:04:05.105744 | orchestrator | Starting collection install process 2026-02-28 00:04:05.105779 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2026-02-28 00:04:05.105811 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2026-02-28 00:04:05.105842 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-28 00:04:05.105892 | orchestrator | ok: Item: services Runtime: 0:00:00.757530 2026-02-28 00:04:05.132504 | 2026-02-28 00:04:05.132666 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-28 00:04:18.692804 | orchestrator | ok 2026-02-28 00:04:18.704296 | 2026-02-28 00:04:18.704496 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-28 00:05:18.751069 | orchestrator | ok 2026-02-28 00:05:18.761287 | 2026-02-28 00:05:18.761434 | TASK [Fetch manager ssh hostkey] 2026-02-28 00:05:20.339542 | orchestrator | Output suppressed because no_log was given 2026-02-28 00:05:20.357678 | 2026-02-28 
00:05:20.357855 | TASK [Get ssh keypair from terraform environment] 2026-02-28 00:05:20.902272 | orchestrator | ok: Runtime: 0:00:00.007340 2026-02-28 00:05:20.917924 | 2026-02-28 00:05:20.918092 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-28 00:05:20.957246 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-02-28 00:05:20.968307 | 2026-02-28 00:05:20.968454 | TASK [Run manager part 0] 2026-02-28 00:05:22.017291 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:05:22.094433 | orchestrator | 2026-02-28 00:05:22.094496 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-02-28 00:05:22.094509 | orchestrator | 2026-02-28 00:05:22.094525 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-02-28 00:05:24.040251 | orchestrator | ok: [testbed-manager] 2026-02-28 00:05:24.040299 | orchestrator | 2026-02-28 00:05:24.040329 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-28 00:05:24.040343 | orchestrator | 2026-02-28 00:05:24.040355 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:05:26.065074 | orchestrator | ok: [testbed-manager] 2026-02-28 00:05:26.065120 | orchestrator | 2026-02-28 00:05:26.065127 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-28 00:05:26.828367 | orchestrator | ok: [testbed-manager] 2026-02-28 00:05:26.828417 | orchestrator | 2026-02-28 00:05:26.828425 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-02-28 00:05:26.886532 | orchestrator | skipping: [testbed-manager] 2026-02-28 
00:05:26.886576 | orchestrator | 2026-02-28 00:05:26.886586 | orchestrator | TASK [Update package cache] **************************************************** 2026-02-28 00:05:26.918497 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:26.918537 | orchestrator | 2026-02-28 00:05:26.918546 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-28 00:05:26.956760 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:26.956814 | orchestrator | 2026-02-28 00:05:26.956821 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-28 00:05:26.989666 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:26.989724 | orchestrator | 2026-02-28 00:05:26.989767 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-28 00:05:27.025542 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:27.025589 | orchestrator | 2026-02-28 00:05:27.025597 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-28 00:05:27.054930 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:27.054975 | orchestrator | 2026-02-28 00:05:27.054982 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-28 00:05:27.086043 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:05:27.086087 | orchestrator | 2026-02-28 00:05:27.086094 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-28 00:05:27.946357 | orchestrator | changed: [testbed-manager] 2026-02-28 00:05:27.946429 | orchestrator | 2026-02-28 00:05:27.946441 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-28 00:08:21.573191 | orchestrator | changed: [testbed-manager] 2026-02-28 00:08:21.631248 | orchestrator | 2026-02-28 00:08:21.631315 | 
orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-28 00:09:43.538627 | orchestrator | changed: [testbed-manager] 2026-02-28 00:09:43.538739 | orchestrator | 2026-02-28 00:09:43.538751 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-28 00:10:05.019579 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:05.019642 | orchestrator | 2026-02-28 00:10:05.019655 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-28 00:10:14.849738 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:14.849809 | orchestrator | 2026-02-28 00:10:14.849826 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-28 00:10:14.898711 | orchestrator | ok: [testbed-manager] 2026-02-28 00:10:14.898806 | orchestrator | 2026-02-28 00:10:14.898824 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-28 00:10:15.658454 | orchestrator | ok: [testbed-manager] 2026-02-28 00:10:15.658493 | orchestrator | 2026-02-28 00:10:15.658503 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-28 00:10:16.378593 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:16.378841 | orchestrator | 2026-02-28 00:10:16.378863 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-28 00:10:22.127411 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:22.127532 | orchestrator | 2026-02-28 00:10:22.127591 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-28 00:10:28.389608 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:28.389710 | orchestrator | 2026-02-28 00:10:28.389732 | orchestrator | TASK [Install requests >= 2.32.2] 
********************************************** 2026-02-28 00:10:31.413695 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:31.413737 | orchestrator | 2026-02-28 00:10:31.413745 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-28 00:10:32.972073 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:32.972163 | orchestrator | 2026-02-28 00:10:32.972180 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-28 00:10:34.024565 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-28 00:10:34.024608 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-28 00:10:34.024616 | orchestrator | 2026-02-28 00:10:34.024623 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-28 00:10:34.068052 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-28 00:10:34.068123 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-28 00:10:34.068134 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-28 00:10:34.068145 | orchestrator | deprecation_warnings=False in ansible.cfg. 
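The preceding tasks install version floors into the venv (`requests >= 2.32.2`, `docker >= 7.1.0`). A pip specifier like `>=2.32.2` is a simple minimum-version check; the sketch below shows the idea with a naive numeric-tuple comparison (an illustration only — real pip specifier handling follows PEP 440, which also covers pre-releases, epochs, and local versions, and `satisfies_floor` is a hypothetical helper name, not part of the testbed code).

```python
def version_tuple(v: str) -> tuple:
    # Split "2.32.2" into (2, 32, 2) for lexicographic comparison.
    return tuple(int(part) for part in v.split("."))

def satisfies_floor(installed: str, minimum: str) -> bool:
    # A ">=" specifier is just a floor: installed must compare >= minimum.
    return version_tuple(installed) >= version_tuple(minimum)

print(satisfies_floor("2.32.3", "2.32.2"))  # True: newer patch release passes
print(satisfies_floor("2.31.0", "2.32.2"))  # False: older release fails the floor
```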
2026-02-28 00:10:37.138212 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-28 00:10:37.138290 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-28 00:10:37.138302 | orchestrator | 2026-02-28 00:10:37.138312 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-28 00:10:37.685369 | orchestrator | changed: [testbed-manager] 2026-02-28 00:10:37.685454 | orchestrator | 2026-02-28 00:10:37.685470 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-28 00:13:01.327382 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-28 00:13:01.327537 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-28 00:13:01.327567 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-28 00:13:01.327588 | orchestrator | 2026-02-28 00:13:01.327629 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-28 00:13:03.586228 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-28 00:13:03.586298 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-28 00:13:03.586308 | orchestrator | 2026-02-28 00:13:03.586315 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-28 00:13:03.586323 | orchestrator | 2026-02-28 00:13:03.586330 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:13:04.918125 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:04.918280 | orchestrator | 2026-02-28 00:13:04.918296 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-28 00:13:04.954112 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:04.954399 | 
orchestrator | 2026-02-28 00:13:04.954472 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-28 00:13:05.018409 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:05.018514 | orchestrator | 2026-02-28 00:13:05.018522 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-28 00:13:06.262396 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:06.262614 | orchestrator | 2026-02-28 00:13:06.262632 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-28 00:13:07.006329 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:07.006393 | orchestrator | 2026-02-28 00:13:07.006402 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-28 00:13:08.370216 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-28 00:13:08.370289 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-28 00:13:08.370304 | orchestrator | 2026-02-28 00:13:08.370335 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-28 00:13:09.830433 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:09.830490 | orchestrator | 2026-02-28 00:13:09.830498 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-28 00:13:11.574839 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-28 00:13:11.574919 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-28 00:13:11.574933 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-28 00:13:11.574943 | orchestrator | 2026-02-28 00:13:11.574955 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-28 00:13:11.631812 | orchestrator | skipping: 
[testbed-manager] 2026-02-28 00:13:11.631855 | orchestrator | 2026-02-28 00:13:11.631863 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-28 00:13:11.700230 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:11.700321 | orchestrator | 2026-02-28 00:13:11.700340 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-28 00:13:12.257662 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:12.257762 | orchestrator | 2026-02-28 00:13:12.257779 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-28 00:13:12.329681 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:12.329772 | orchestrator | 2026-02-28 00:13:12.329788 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-28 00:13:13.204326 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:13:13.204479 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:13.204507 | orchestrator | 2026-02-28 00:13:13.204529 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-28 00:13:13.239808 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:13.239876 | orchestrator | 2026-02-28 00:13:13.239891 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-28 00:13:13.268783 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:13.268867 | orchestrator | 2026-02-28 00:13:13.268882 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-28 00:13:13.311511 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:13.311618 | orchestrator | 2026-02-28 00:13:13.311649 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-28 00:13:13.395763 | 
orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:13.395871 | orchestrator | 2026-02-28 00:13:13.395899 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-28 00:13:14.123853 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:14.123939 | orchestrator | 2026-02-28 00:13:14.123955 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-28 00:13:14.123968 | orchestrator | 2026-02-28 00:13:14.123980 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:13:15.525072 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:15.525155 | orchestrator | 2026-02-28 00:13:15.525172 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-28 00:13:16.478160 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:16.478267 | orchestrator | 2026-02-28 00:13:16.478294 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:13:16.478314 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-28 00:13:16.478326 | orchestrator | 2026-02-28 00:13:16.846244 | orchestrator | ok: Runtime: 0:07:55.296284 2026-02-28 00:13:16.864927 | 2026-02-28 00:13:16.865073 | TASK [Point out that the log in on the manager is now possible] 2026-02-28 00:13:16.913502 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-28 00:13:16.923099 | 2026-02-28 00:13:16.923215 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-28 00:13:16.970800 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.

2026-02-28 00:13:16.981107 | 2026-02-28 00:13:16.981236 | TASK [Run manager part 1 + 2] 2026-02-28 00:13:17.898394 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-28 00:13:17.954085 | orchestrator | 2026-02-28 00:13:17.954158 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-28 00:13:17.954171 | orchestrator | 2026-02-28 00:13:17.954194 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:13:20.906771 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:20.906935 | orchestrator | 2026-02-28 00:13:20.906989 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-28 00:13:20.960114 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:20.960228 | orchestrator | 2026-02-28 00:13:20.960258 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-28 00:13:21.004527 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:21.004612 | orchestrator | 2026-02-28 00:13:21.004628 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-28 00:13:21.041754 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:21.041808 | orchestrator | 2026-02-28 00:13:21.041817 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-28 00:13:21.116841 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:21.116949 | orchestrator | 2026-02-28 00:13:21.116967 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-28 00:13:21.177112 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:21.177192 | orchestrator | 2026-02-28 00:13:21.177209 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-28 00:13:21.240145 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-28 00:13:21.240231 | orchestrator | 2026-02-28 00:13:21.240247 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-28 00:13:21.987871 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:21.987956 | orchestrator | 2026-02-28 00:13:21.987977 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-28 00:13:22.055595 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:22.055662 | orchestrator | 2026-02-28 00:13:22.055672 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-28 00:13:23.451703 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:23.451802 | orchestrator | 2026-02-28 00:13:23.451824 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-28 00:13:24.023272 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:24.023363 | orchestrator | 2026-02-28 00:13:24.023381 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-28 00:13:26.866869 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:26.866960 | orchestrator | 2026-02-28 00:13:26.866985 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-28 00:13:41.917555 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:41.917658 | orchestrator | 2026-02-28 00:13:41.917677 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-28 00:13:42.600884 | orchestrator | ok: [testbed-manager] 2026-02-28 00:13:42.600948 | orchestrator | 2026-02-28 00:13:42.600966 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-28 00:13:42.661889 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:42.661937 | orchestrator | 2026-02-28 00:13:42.661948 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-28 00:13:43.689868 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:43.689908 | orchestrator | 2026-02-28 00:13:43.689916 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-28 00:13:44.659146 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:44.659236 | orchestrator | 2026-02-28 00:13:44.659253 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-28 00:13:45.239216 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:45.239305 | orchestrator | 2026-02-28 00:13:45.239322 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-28 00:13:45.280421 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-28 00:13:45.280534 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-28 00:13:45.280549 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-28 00:13:45.280562 | orchestrator | deprecation_warnings=False in ansible.cfg. 
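The deprecation warning above states its own remedy. The corresponding `ansible.cfg` fragment would look like the following (a minimal sketch assuming the standard `[defaults]` section; the testbed's actual `ansible.cfg` is not shown in this log):

```ini
# ansible.cfg — silence Ansible deprecation warnings (hypothetical fragment)
[defaults]
deprecation_warnings = False
```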
2026-02-28 00:13:47.694214 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:47.694322 | orchestrator | 2026-02-28 00:13:47.694348 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-28 00:13:56.300268 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-28 00:13:56.300315 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-28 00:13:56.300324 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-28 00:13:56.300330 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-28 00:13:56.300340 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-28 00:13:56.300346 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-28 00:13:56.300352 | orchestrator | 2026-02-28 00:13:56.300358 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-28 00:13:57.330871 | orchestrator | changed: [testbed-manager] 2026-02-28 00:13:57.330941 | orchestrator | 2026-02-28 00:13:57.330957 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-28 00:13:57.372055 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:13:57.372125 | orchestrator | 2026-02-28 00:13:57.372140 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-28 00:14:02.185307 | orchestrator | changed: [testbed-manager] 2026-02-28 00:14:02.185361 | orchestrator | 2026-02-28 00:14:02.185372 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-28 00:14:02.230314 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:14:02.230425 | orchestrator | 2026-02-28 00:14:02.230443 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-28 00:15:41.946741 | orchestrator | changed: [testbed-manager] 2026-02-28 
00:15:41.946847 | orchestrator | 2026-02-28 00:15:41.946867 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-28 00:15:43.085401 | orchestrator | ok: [testbed-manager] 2026-02-28 00:15:43.085494 | orchestrator | 2026-02-28 00:15:43.085512 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:15:43.085526 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-28 00:15:43.085538 | orchestrator | 2026-02-28 00:15:43.607202 | orchestrator | ok: Runtime: 0:02:25.859301 2026-02-28 00:15:43.616804 | 2026-02-28 00:15:43.616947 | TASK [Reboot manager] 2026-02-28 00:15:45.150475 | orchestrator | ok: Runtime: 0:00:00.957373 2026-02-28 00:15:45.159101 | 2026-02-28 00:15:45.159232 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-28 00:16:00.106175 | orchestrator | ok 2026-02-28 00:16:00.115503 | 2026-02-28 00:16:00.115644 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-28 00:17:00.160031 | orchestrator | ok 2026-02-28 00:17:00.169679 | 2026-02-28 00:17:00.169831 | TASK [Deploy manager + bootstrap nodes] 2026-02-28 00:17:02.674571 | orchestrator | 2026-02-28 00:17:02.675959 | orchestrator | # DEPLOY MANAGER 2026-02-28 00:17:02.676034 | orchestrator | 2026-02-28 00:17:02.676052 | orchestrator | + set -e 2026-02-28 00:17:02.676067 | orchestrator | + echo 2026-02-28 00:17:02.676082 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-28 00:17:02.676107 | orchestrator | + echo 2026-02-28 00:17:02.676158 | orchestrator | + cat /opt/manager-vars.sh 2026-02-28 00:17:02.678167 | orchestrator | export NUMBER_OF_NODES=6 2026-02-28 00:17:02.678199 | orchestrator | 2026-02-28 00:17:02.678213 | orchestrator | export CEPH_VERSION=reef 2026-02-28 00:17:02.678226 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-28 00:17:02.678239 | orchestrator 
| export MANAGER_VERSION=latest 2026-02-28 00:17:02.678262 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-28 00:17:02.678273 | orchestrator | 2026-02-28 00:17:02.678292 | orchestrator | export ARA=false 2026-02-28 00:17:02.678304 | orchestrator | export DEPLOY_MODE=manager 2026-02-28 00:17:02.678321 | orchestrator | export TEMPEST=true 2026-02-28 00:17:02.678333 | orchestrator | export IS_ZUUL=true 2026-02-28 00:17:02.678344 | orchestrator | 2026-02-28 00:17:02.678362 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.253 2026-02-28 00:17:02.678374 | orchestrator | export EXTERNAL_API=false 2026-02-28 00:17:02.678385 | orchestrator | 2026-02-28 00:17:02.678451 | orchestrator | export IMAGE_USER=ubuntu 2026-02-28 00:17:02.678472 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-28 00:17:02.678484 | orchestrator | 2026-02-28 00:17:02.678495 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-28 00:17:02.678514 | orchestrator | 2026-02-28 00:17:02.678526 | orchestrator | + echo 2026-02-28 00:17:02.678540 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-28 00:17:02.679136 | orchestrator | ++ export INTERACTIVE=false 2026-02-28 00:17:02.679158 | orchestrator | ++ INTERACTIVE=false 2026-02-28 00:17:02.679170 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-28 00:17:02.679182 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-28 00:17:02.679339 | orchestrator | + source /opt/manager-vars.sh 2026-02-28 00:17:02.679354 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-28 00:17:02.679366 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-28 00:17:02.679453 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-28 00:17:02.679468 | orchestrator | ++ CEPH_VERSION=reef 2026-02-28 00:17:02.679479 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-28 00:17:02.679489 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-28 00:17:02.679500 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-28 00:17:02.679511 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-02-28 00:17:02.679522 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-28 00:17:02.679545 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-28 00:17:02.679561 | orchestrator | ++ export ARA=false 2026-02-28 00:17:02.679572 | orchestrator | ++ ARA=false 2026-02-28 00:17:02.679583 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-28 00:17:02.679594 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-28 00:17:02.679604 | orchestrator | ++ export TEMPEST=true 2026-02-28 00:17:02.679615 | orchestrator | ++ TEMPEST=true 2026-02-28 00:17:02.679626 | orchestrator | ++ export IS_ZUUL=true 2026-02-28 00:17:02.679637 | orchestrator | ++ IS_ZUUL=true 2026-02-28 00:17:02.679648 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.253 2026-02-28 00:17:02.679752 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.253 2026-02-28 00:17:02.679768 | orchestrator | ++ export EXTERNAL_API=false 2026-02-28 00:17:02.679780 | orchestrator | ++ EXTERNAL_API=false 2026-02-28 00:17:02.679790 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-28 00:17:02.679801 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-28 00:17:02.679812 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-28 00:17:02.679823 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-28 00:17:02.679905 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-28 00:17:02.679919 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-28 00:17:02.679931 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-28 00:17:02.733595 | orchestrator | + docker version 2026-02-28 00:17:02.880157 | orchestrator | Client: Docker Engine - Community 2026-02-28 00:17:02.880271 | orchestrator | Version: 27.5.1 2026-02-28 00:17:02.880288 | orchestrator | API version: 1.47 2026-02-28 00:17:02.880303 | orchestrator | Go version: go1.22.11 2026-02-28 00:17:02.880313 | orchestrator | Git commit: 9f9e405 2026-02-28 00:17:02.880325 
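The deploy script sources `/opt/manager-vars.sh`, a plain file of `export KEY=VALUE` lines (`NUMBER_OF_NODES`, `CEPH_VERSION`, `MANAGER_VERSION`, and so on), to parameterize the run. A simplified sketch of reading such a file outside of bash (`parse_export_lines` is an illustrative helper, not testbed code; real sourcing in bash additionally handles quoting and variable expansion that this parser ignores):

```python
import re

def parse_export_lines(text: str) -> dict:
    # Collect KEY=VALUE pairs from simple `export KEY=VALUE` lines.
    env = {}
    for match in re.finditer(r"^export\s+(\w+)=(\S*)$", text, re.MULTILINE):
        env[match.group(1)] = match.group(2)
    return env

# A few lines mirroring the manager-vars.sh shown in the log above.
vars_sh = """\
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export MANAGER_VERSION=latest
"""

print(parse_export_lines(vars_sh)["CEPH_VERSION"])  # reef
```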
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-28 00:17:02.880337 | orchestrator | OS/Arch: linux/amd64 2026-02-28 00:17:02.880347 | orchestrator | Context: default 2026-02-28 00:17:02.880358 | orchestrator | 2026-02-28 00:17:02.880369 | orchestrator | Server: Docker Engine - Community 2026-02-28 00:17:02.880380 | orchestrator | Engine: 2026-02-28 00:17:02.880391 | orchestrator | Version: 27.5.1 2026-02-28 00:17:02.880403 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-28 00:17:02.880447 | orchestrator | Go version: go1.22.11 2026-02-28 00:17:02.880459 | orchestrator | Git commit: 4c9b3b0 2026-02-28 00:17:02.880470 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-28 00:17:02.880481 | orchestrator | OS/Arch: linux/amd64 2026-02-28 00:17:02.880491 | orchestrator | Experimental: false 2026-02-28 00:17:02.880502 | orchestrator | containerd: 2026-02-28 00:17:02.880512 | orchestrator | Version: v2.2.1 2026-02-28 00:17:02.880535 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-28 00:17:02.880547 | orchestrator | runc: 2026-02-28 00:17:02.880557 | orchestrator | Version: 1.3.4 2026-02-28 00:17:02.880568 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-28 00:17:02.880580 | orchestrator | docker-init: 2026-02-28 00:17:02.880590 | orchestrator | Version: 0.19.0 2026-02-28 00:17:02.880602 | orchestrator | GitCommit: de40ad0 2026-02-28 00:17:02.883341 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-28 00:17:02.893109 | orchestrator | + set -e 2026-02-28 00:17:02.893219 | orchestrator | + source /opt/manager-vars.sh 2026-02-28 00:17:02.893234 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-28 00:17:02.893247 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-28 00:17:02.893258 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-28 00:17:02.893269 | orchestrator | ++ CEPH_VERSION=reef 2026-02-28 00:17:02.893280 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-28 
00:17:02.893292 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-28 00:17:02.893303 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-28 00:17:02.893314 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-28 00:17:02.893325 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-28 00:17:02.893336 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-28 00:17:02.893357 | orchestrator | ++ export ARA=false 2026-02-28 00:17:02.893368 | orchestrator | ++ ARA=false 2026-02-28 00:17:02.893379 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-28 00:17:02.893390 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-28 00:17:02.893411 | orchestrator | ++ export TEMPEST=true 2026-02-28 00:17:02.893422 | orchestrator | ++ TEMPEST=true 2026-02-28 00:17:02.893443 | orchestrator | ++ export IS_ZUUL=true 2026-02-28 00:17:02.893454 | orchestrator | ++ IS_ZUUL=true 2026-02-28 00:17:02.893465 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.253 2026-02-28 00:17:02.893476 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.253 2026-02-28 00:17:02.893498 | orchestrator | ++ export EXTERNAL_API=false 2026-02-28 00:17:02.893509 | orchestrator | ++ EXTERNAL_API=false 2026-02-28 00:17:02.893520 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-28 00:17:02.893531 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-28 00:17:02.893542 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-28 00:17:02.893556 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-28 00:17:02.893579 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-28 00:17:02.893590 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-28 00:17:02.893601 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-28 00:17:02.893612 | orchestrator | ++ export INTERACTIVE=false 2026-02-28 00:17:02.893623 | orchestrator | ++ INTERACTIVE=false 2026-02-28 00:17:02.893633 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-28 00:17:02.893649 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-02-28 00:17:02.893950 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-28 00:17:02.893983 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-28 00:17:02.893995 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-02-28 00:17:02.901509 | orchestrator | + set -e 2026-02-28 00:17:02.901613 | orchestrator | + VERSION=reef 2026-02-28 00:17:02.902524 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:17:02.908649 | orchestrator | + [[ -n ceph_version: reef ]] 2026-02-28 00:17:02.908726 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:17:02.914355 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-02-28 00:17:02.921583 | orchestrator | + set -e 2026-02-28 00:17:02.921650 | orchestrator | + VERSION=2024.2 2026-02-28 00:17:02.922381 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:17:02.926078 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-02-28 00:17:02.926131 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-02-28 00:17:02.931154 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-28 00:17:02.932348 | orchestrator | ++ semver latest 7.0.0 2026-02-28 00:17:02.993004 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:17:02.993079 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-28 00:17:02.993086 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-28 00:17:02.994225 | orchestrator | ++ semver latest 10.0.0-0 2026-02-28 00:17:03.053803 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:17:03.054494 | orchestrator | ++ semver 2024.2 2025.1 2026-02-28 00:17:03.106149 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:17:03.106248 | orchestrator | + 
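The version gates above call `contrib/semver2.sh` (symlinked to `semver`), which prints -1, 0, or 1 for a comparison; the script output then feeds tests like `[[ -1 -ge 0 ]]`. A simplified Python analogue of that three-way comparison (`compare_semver` is an illustrative name; it handles only plain dotted numeric versions, whereas the real script also deals with pre-release tags and non-semver strings such as "latest", which the log shows comparing as -1):

```python
def compare_semver(a: str, b: str) -> int:
    # Return -1, 0, or 1, mirroring the -ge gate style used in the deploy script.
    def parts(v: str) -> list:
        return [int(p) for p in v.split(".")]
    pa, pb = parts(a), parts(b)
    # (x > y) - (x < y) is the classic Python three-way comparison idiom.
    return (pa > pb) - (pa < pb)

print(compare_semver("2024.2", "2025.1"))  # -1: older version, so the `-ge 0` gate is skipped
```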
/opt/configuration/scripts/enable-resource-nodes.sh 2026-02-28 00:17:03.189626 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-28 00:17:03.190725 | orchestrator | + source /opt/venv/bin/activate 2026-02-28 00:17:03.192006 | orchestrator | ++ deactivate nondestructive 2026-02-28 00:17:03.192048 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:17:03.192059 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:17:03.192068 | orchestrator | ++ hash -r 2026-02-28 00:17:03.192083 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:17:03.192092 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-28 00:17:03.192101 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-28 00:17:03.192113 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-02-28 00:17:03.192453 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-28 00:17:03.192468 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-28 00:17:03.192477 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-28 00:17:03.192486 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-28 00:17:03.192496 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:17:03.192505 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:17:03.192514 | orchestrator | ++ export PATH 2026-02-28 00:17:03.192523 | orchestrator | ++ '[' -n '' ']' 2026-02-28 00:17:03.192532 | orchestrator | ++ '[' -z '' ']' 2026-02-28 00:17:03.192565 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-28 00:17:03.192575 | orchestrator | ++ PS1='(venv) ' 2026-02-28 00:17:03.192584 | orchestrator | ++ export PS1 2026-02-28 00:17:03.192593 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-28 00:17:03.192602 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-28 00:17:03.192611 | orchestrator | ++ hash -r 2026-02-28 00:17:03.192806 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-28 00:17:04.382339 | orchestrator | 2026-02-28 00:17:04.382434 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-28 00:17:04.382446 | orchestrator | 2026-02-28 00:17:04.382453 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-28 00:17:04.945046 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:04.945140 | orchestrator | 2026-02-28 00:17:04.945150 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-02-28 00:17:05.919203 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:05.919287 | orchestrator | 2026-02-28 00:17:05.919299 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-28 00:17:05.919308 | orchestrator | 2026-02-28 00:17:05.919317 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:17:08.182589 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:08.183948 | orchestrator | 2026-02-28 00:17:08.184038 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-28 00:17:08.230828 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:08.230965 | orchestrator | 2026-02-28 00:17:08.230986 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-28 00:17:08.697953 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:08.698085 | orchestrator | 2026-02-28 00:17:08.698102 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-28 00:17:08.741851 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:08.741998 | orchestrator | 2026-02-28 00:17:08.742070 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-02-28 00:17:09.073859 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:09.074073 | orchestrator | 2026-02-28 00:17:09.074093 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-28 00:17:09.413511 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:09.413617 | orchestrator | 2026-02-28 00:17:09.413633 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-28 00:17:09.527406 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:09.527503 | orchestrator | 2026-02-28 00:17:09.527516 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-02-28 00:17:09.527527 | orchestrator | 2026-02-28 00:17:09.527536 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:17:13.240554 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:13.240676 | orchestrator | 2026-02-28 00:17:13.240703 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-28 00:17:13.341709 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-28 00:17:13.341964 | orchestrator | 2026-02-28 00:17:13.342451 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-28 00:17:13.405983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-28 00:17:13.406132 | orchestrator | 2026-02-28 00:17:13.406149 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-28 00:17:14.503649 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-28 00:17:14.503725 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2026-02-28 00:17:14.503734 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-28 00:17:14.503740 | orchestrator | 2026-02-28 00:17:14.503746 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-28 00:17:16.293500 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-28 00:17:16.293598 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-28 00:17:16.293614 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-28 00:17:16.293627 | orchestrator | 2026-02-28 00:17:16.293640 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-02-28 00:17:16.922187 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:17:16.922288 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:16.922302 | orchestrator | 2026-02-28 00:17:16.922315 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-28 00:17:17.538568 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:17:17.538644 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:17.538653 | orchestrator | 2026-02-28 00:17:17.538661 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-28 00:17:17.596735 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:17.596826 | orchestrator | 2026-02-28 00:17:17.596840 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-28 00:17:17.948820 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:17.948916 | orchestrator | 2026-02-28 00:17:17.948985 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-28 00:17:18.023278 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-28 00:17:18.023377 | orchestrator | 2026-02-28 00:17:18.023402 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-28 00:17:19.118813 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:19.118909 | orchestrator | 2026-02-28 00:17:19.118921 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-28 00:17:19.927394 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:19.927470 | orchestrator | 2026-02-28 00:17:19.927496 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-28 00:17:30.350443 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:30.350562 | orchestrator | 2026-02-28 00:17:30.350604 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-28 00:17:30.401196 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:30.401292 | orchestrator | 2026-02-28 00:17:30.401307 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-28 00:17:30.401319 | orchestrator | 2026-02-28 00:17:30.401330 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:17:32.137136 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:32.137240 | orchestrator | 2026-02-28 00:17:32.137286 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-28 00:17:32.236523 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-28 00:17:32.236616 | orchestrator | 2026-02-28 00:17:32.236631 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-28 00:17:32.292333 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:17:32.292431 | orchestrator | 2026-02-28 00:17:32.292447 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-28 00:17:34.629144 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:34.629231 | orchestrator | 2026-02-28 00:17:34.629239 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-28 00:17:34.664458 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:34.664540 | orchestrator | 2026-02-28 00:17:34.664549 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-28 00:17:34.785919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-28 00:17:34.786085 | orchestrator | 2026-02-28 00:17:34.786102 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-28 00:17:37.597792 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-28 00:17:37.597897 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-28 00:17:37.597914 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-28 00:17:37.597927 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-28 00:17:37.597938 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-28 00:17:37.597949 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-28 00:17:37.597960 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-28 00:17:37.597971 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-28 00:17:37.598101 | orchestrator | 2026-02-28 00:17:37.598121 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2026-02-28 00:17:38.225681 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:38.225754 | orchestrator | 2026-02-28 00:17:38.225761 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-28 00:17:38.858690 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:38.858772 | orchestrator | 2026-02-28 00:17:38.858781 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-28 00:17:38.939758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-28 00:17:38.939844 | orchestrator | 2026-02-28 00:17:38.939856 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-28 00:17:40.124660 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-28 00:17:40.124761 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-28 00:17:40.124775 | orchestrator | 2026-02-28 00:17:40.124788 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-28 00:17:40.743412 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:40.743512 | orchestrator | 2026-02-28 00:17:40.743530 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-28 00:17:40.794304 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:40.794398 | orchestrator | 2026-02-28 00:17:40.794414 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-28 00:17:40.872271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-28 00:17:40.872369 | orchestrator | 2026-02-28 00:17:40.872384 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2026-02-28 00:17:41.480511 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:41.480627 | orchestrator | 2026-02-28 00:17:41.480648 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-28 00:17:41.540539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-28 00:17:41.540638 | orchestrator | 2026-02-28 00:17:41.540645 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-28 00:17:42.857870 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:17:42.857981 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-28 00:17:42.857997 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:42.858119 | orchestrator | 2026-02-28 00:17:42.858132 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-28 00:17:43.445116 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:43.445177 | orchestrator | 2026-02-28 00:17:43.445187 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-28 00:17:43.506150 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:43.506234 | orchestrator | 2026-02-28 00:17:43.506251 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-28 00:17:43.599888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-28 00:17:43.599987 | orchestrator | 2026-02-28 00:17:43.600081 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-28 00:17:44.118365 | orchestrator | changed: [testbed-manager] 2026-02-28 
00:17:44.118455 | orchestrator | 2026-02-28 00:17:44.118488 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-28 00:17:44.526246 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:44.526311 | orchestrator | 2026-02-28 00:17:44.526321 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-28 00:17:45.744111 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-28 00:17:45.744208 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-28 00:17:45.744225 | orchestrator | 2026-02-28 00:17:45.744239 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-28 00:17:46.347227 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:46.347315 | orchestrator | 2026-02-28 00:17:46.347331 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-28 00:17:46.699401 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:46.699477 | orchestrator | 2026-02-28 00:17:46.699493 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-28 00:17:47.037009 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:47.037123 | orchestrator | 2026-02-28 00:17:47.037148 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-28 00:17:47.088364 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:47.088443 | orchestrator | 2026-02-28 00:17:47.088459 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-28 00:17:47.156193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-28 00:17:47.156290 | orchestrator | 2026-02-28 00:17:47.156311 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-02-28 00:17:47.202091 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:47.202154 | orchestrator | 2026-02-28 00:17:47.202163 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-28 00:17:49.194319 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-28 00:17:49.194423 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-28 00:17:49.194437 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-28 00:17:49.194448 | orchestrator | 2026-02-28 00:17:49.194460 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-28 00:17:49.889164 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:49.889230 | orchestrator | 2026-02-28 00:17:49.889236 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-28 00:17:50.583530 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:50.583609 | orchestrator | 2026-02-28 00:17:50.583619 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-28 00:17:51.281546 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:51.281641 | orchestrator | 2026-02-28 00:17:51.281654 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-28 00:17:51.349730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-28 00:17:51.349832 | orchestrator | 2026-02-28 00:17:51.349848 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-28 00:17:51.380776 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:51.380900 | orchestrator | 2026-02-28 00:17:51.380923 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-02-28 00:17:52.067426 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-28 00:17:52.067555 | orchestrator | 2026-02-28 00:17:52.067581 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-28 00:17:52.158512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-28 00:17:52.158609 | orchestrator | 2026-02-28 00:17:52.158624 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-28 00:17:52.860876 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:52.860978 | orchestrator | 2026-02-28 00:17:52.860994 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-28 00:17:53.454533 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:53.454636 | orchestrator | 2026-02-28 00:17:53.454653 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-28 00:17:53.510971 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:17:53.511098 | orchestrator | 2026-02-28 00:17:53.511110 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-28 00:17:53.566945 | orchestrator | ok: [testbed-manager] 2026-02-28 00:17:53.567020 | orchestrator | 2026-02-28 00:17:53.567046 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-28 00:17:54.374317 | orchestrator | changed: [testbed-manager] 2026-02-28 00:17:54.374412 | orchestrator | 2026-02-28 00:17:54.374426 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-28 00:19:02.040394 | orchestrator | changed: [testbed-manager] 2026-02-28 00:19:02.040517 | orchestrator | 2026-02-28 
00:19:02.040536 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-28 00:19:03.062476 | orchestrator | ok: [testbed-manager] 2026-02-28 00:19:03.062571 | orchestrator | 2026-02-28 00:19:03.062584 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-28 00:19:03.123291 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:19:03.123443 | orchestrator | 2026-02-28 00:19:03.123459 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-28 00:19:05.535214 | orchestrator | changed: [testbed-manager] 2026-02-28 00:19:05.535344 | orchestrator | 2026-02-28 00:19:05.535370 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-28 00:19:05.632724 | orchestrator | ok: [testbed-manager] 2026-02-28 00:19:05.632816 | orchestrator | 2026-02-28 00:19:05.632850 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-28 00:19:05.632864 | orchestrator | 2026-02-28 00:19:05.632875 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-28 00:19:05.679946 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:19:05.680024 | orchestrator | 2026-02-28 00:19:05.680038 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-28 00:20:05.735856 | orchestrator | Pausing for 60 seconds 2026-02-28 00:20:05.735950 | orchestrator | changed: [testbed-manager] 2026-02-28 00:20:05.735963 | orchestrator | 2026-02-28 00:20:05.735974 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-28 00:20:08.786930 | orchestrator | changed: [testbed-manager] 2026-02-28 00:20:08.787023 | orchestrator | 2026-02-28 00:20:08.787045 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-02-28 00:20:50.270099 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-28 00:20:50.270217 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-28 00:20:50.270234 | orchestrator | changed: [testbed-manager] 2026-02-28 00:20:50.270277 | orchestrator | 2026-02-28 00:20:50.270289 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-28 00:21:00.666174 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:00.666289 | orchestrator | 2026-02-28 00:21:00.666306 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-28 00:21:00.742717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-28 00:21:00.742798 | orchestrator | 2026-02-28 00:21:00.742809 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-28 00:21:00.742818 | orchestrator | 2026-02-28 00:21:00.742827 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-28 00:21:00.799199 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:00.799325 | orchestrator | 2026-02-28 00:21:00.799351 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-28 00:21:00.872981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-28 00:21:00.873085 | orchestrator | 2026-02-28 00:21:00.873103 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-28 00:21:01.623541 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:01.623640 | 
orchestrator | 2026-02-28 00:21:01.623657 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-28 00:21:04.785142 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:04.785241 | orchestrator | 2026-02-28 00:21:04.785255 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-28 00:21:04.855720 | orchestrator | ok: [testbed-manager] => { 2026-02-28 00:21:04.855797 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-28 00:21:04.855807 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-28 00:21:04.855816 | orchestrator | "Checking running containers against expected versions...", 2026-02-28 00:21:04.855824 | orchestrator | "", 2026-02-28 00:21:04.855835 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-28 00:21:04.855843 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-28 00:21:04.855850 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.855858 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-28 00:21:04.855865 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.855873 | orchestrator | "", 2026-02-28 00:21:04.855880 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-28 00:21:04.855888 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-02-28 00:21:04.855895 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.855902 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-02-28 00:21:04.855910 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.855917 | orchestrator | "", 2026-02-28 00:21:04.855924 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-28 00:21:04.855931 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-28 
00:21:04.855939 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.855946 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-28 00:21:04.855953 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.855960 | orchestrator | "", 2026-02-28 00:21:04.855968 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-28 00:21:04.855975 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-28 00:21:04.855983 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.855990 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-28 00:21:04.855997 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856004 | orchestrator | "", 2026-02-28 00:21:04.856012 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-28 00:21:04.856019 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-02-28 00:21:04.856047 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856055 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-02-28 00:21:04.856062 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856069 | orchestrator | "", 2026-02-28 00:21:04.856077 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-28 00:21:04.856084 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856091 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856098 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856105 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856113 | orchestrator | "", 2026-02-28 00:21:04.856122 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-28 00:21:04.856134 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-28 00:21:04.856153 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856168 | orchestrator | " Running: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-02-28 00:21:04.856193 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856205 | orchestrator | "", 2026-02-28 00:21:04.856215 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-28 00:21:04.856225 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-28 00:21:04.856235 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856245 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-28 00:21:04.856255 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856266 | orchestrator | "", 2026-02-28 00:21:04.856289 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-28 00:21:04.856303 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-02-28 00:21:04.856319 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856332 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-02-28 00:21:04.856348 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856361 | orchestrator | "", 2026-02-28 00:21:04.856373 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-28 00:21:04.856382 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-28 00:21:04.856399 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856408 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-28 00:21:04.856416 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856424 | orchestrator | "", 2026-02-28 00:21:04.856433 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-28 00:21:04.856441 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856450 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856459 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856488 | orchestrator | " 
Status: ✅ MATCH", 2026-02-28 00:21:04.856497 | orchestrator | "", 2026-02-28 00:21:04.856504 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-28 00:21:04.856511 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856518 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856525 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856533 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856540 | orchestrator | "", 2026-02-28 00:21:04.856547 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-28 00:21:04.856554 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856561 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856569 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856576 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856583 | orchestrator | "", 2026-02-28 00:21:04.856590 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-28 00:21:04.856597 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856605 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856612 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856627 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856635 | orchestrator | "", 2026-02-28 00:21:04.856642 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-28 00:21:04.856666 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856674 | orchestrator | " Enabled: true", 2026-02-28 00:21:04.856681 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-28 00:21:04.856688 | orchestrator | " Status: ✅ MATCH", 2026-02-28 00:21:04.856695 | orchestrator | "", 2026-02-28 00:21:04.856702 | orchestrator | "=== Summary ===", 2026-02-28 
00:21:04.856709 | orchestrator | "Errors (version mismatches): 0", 2026-02-28 00:21:04.856717 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-28 00:21:04.856724 | orchestrator | "", 2026-02-28 00:21:04.856731 | orchestrator | "✅ All running containers match expected versions!" 2026-02-28 00:21:04.856738 | orchestrator | ] 2026-02-28 00:21:04.856745 | orchestrator | } 2026-02-28 00:21:04.856753 | orchestrator | 2026-02-28 00:21:04.856761 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-28 00:21:04.915440 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:04.915532 | orchestrator | 2026-02-28 00:21:04.915541 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:21:04.915550 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-28 00:21:04.915556 | orchestrator | 2026-02-28 00:21:05.012340 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-28 00:21:05.012447 | orchestrator | + deactivate 2026-02-28 00:21:05.012525 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-28 00:21:05.012549 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-28 00:21:05.012566 | orchestrator | + export PATH 2026-02-28 00:21:05.012581 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-28 00:21:05.012595 | orchestrator | + '[' -n '' ']' 2026-02-28 00:21:05.012610 | orchestrator | + hash -r 2026-02-28 00:21:05.012625 | orchestrator | + '[' -n '' ']' 2026-02-28 00:21:05.012650 | orchestrator | + unset VIRTUAL_ENV 2026-02-28 00:21:05.012664 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-28 00:21:05.012679 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-28 00:21:05.012693 | orchestrator | + unset -f deactivate 2026-02-28 00:21:05.012708 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-28 00:21:05.021358 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-28 00:21:05.021413 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-28 00:21:05.021427 | orchestrator | + local max_attempts=60 2026-02-28 00:21:05.021438 | orchestrator | + local name=ceph-ansible 2026-02-28 00:21:05.021450 | orchestrator | + local attempt_num=1 2026-02-28 00:21:05.022198 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:21:05.059532 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:21:05.059604 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-28 00:21:05.059617 | orchestrator | + local max_attempts=60 2026-02-28 00:21:05.059628 | orchestrator | + local name=kolla-ansible 2026-02-28 00:21:05.059640 | orchestrator | + local attempt_num=1 2026-02-28 00:21:05.059651 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-28 00:21:05.098924 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:21:05.099011 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-28 00:21:05.099025 | orchestrator | + local max_attempts=60 2026-02-28 00:21:05.099037 | orchestrator | + local name=osism-ansible 2026-02-28 00:21:05.099048 | orchestrator | + local attempt_num=1 2026-02-28 00:21:05.099646 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-28 00:21:05.136360 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:21:05.136456 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-28 00:21:05.136538 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-28 00:21:05.777194 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-28 00:21:05.947664 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-28 00:21:05.947785 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-02-28 00:21:05.947801 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-02-28 00:21:05.947812 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-02-28 00:21:05.947823 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-02-28 00:21:05.947833 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:05.947843 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:05.947852 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 57 seconds (healthy) 2026-02-28 00:21:05.947878 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:05.947888 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2026-02-28 00:21:05.947898 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up About a minute (healthy) 
2026-02-28 00:21:05.947907 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2026-02-28 00:21:05.947917 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-02-28 00:21:05.947926 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-02-28 00:21:05.947936 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-02-28 00:21:05.947946 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2026-02-28 00:21:05.953087 | orchestrator | ++ semver latest 7.0.0 2026-02-28 00:21:05.995345 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:21:05.995419 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-28 00:21:05.995434 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-28 00:21:06.000364 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-28 00:21:18.180993 | orchestrator | 2026-02-28 00:21:18 | INFO  | Prepare task for execution of resolvconf. 2026-02-28 00:21:18.388052 | orchestrator | 2026-02-28 00:21:18 | INFO  | Task 4064dc86-0f68-4100-a52f-8c31ae1528c2 (resolvconf) was prepared for execution. 2026-02-28 00:21:18.388180 | orchestrator | 2026-02-28 00:21:18 | INFO  | It takes a moment until task 4064dc86-0f68-4100-a52f-8c31ae1528c2 (resolvconf) has been started and output is visible here. 
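The `set -x` trace above shows a `wait_for_container_healthy` helper being called for ceph-ansible, kolla-ansible and osism-ansible before the deployment continues. A hedged reconstruction of that helper, assuming a simple poll loop (the variable names `max_attempts`, `name` and `attempt_num` come from the trace; the loop body and sleep interval are inferred, not shown in the log):

```shell
# Hedged reconstruction of the wait_for_container_healthy helper traced above.
# It polls `docker inspect` for the container's health state until Docker
# reports "healthy" or the attempt budget runs out.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    while true; do
        # Ask Docker for the container's health status, e.g. "healthy"
        local status
        status="$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)"
        if [[ "$status" == "healthy" ]]; then
            return 0
        fi
        if (( attempt_num >= max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 1
    done
}
```

In the run above all three containers already report `healthy` on the first `docker inspect`, so the loop exits immediately each time.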
2026-02-28 00:21:31.932990 | orchestrator | 2026-02-28 00:21:31.933101 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-28 00:21:31.933117 | orchestrator | 2026-02-28 00:21:31.933129 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:21:31.933141 | orchestrator | Saturday 28 February 2026 00:21:22 +0000 (0:00:00.142) 0:00:00.142 ***** 2026-02-28 00:21:31.933152 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:31.933164 | orchestrator | 2026-02-28 00:21:31.933176 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-28 00:21:31.933188 | orchestrator | Saturday 28 February 2026 00:21:26 +0000 (0:00:03.725) 0:00:03.868 ***** 2026-02-28 00:21:31.933270 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:31.933283 | orchestrator | 2026-02-28 00:21:31.933294 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-28 00:21:31.933305 | orchestrator | Saturday 28 February 2026 00:21:26 +0000 (0:00:00.052) 0:00:03.920 ***** 2026-02-28 00:21:31.933317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-28 00:21:31.933329 | orchestrator | 2026-02-28 00:21:31.933340 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-28 00:21:31.933351 | orchestrator | Saturday 28 February 2026 00:21:26 +0000 (0:00:00.084) 0:00:04.004 ***** 2026-02-28 00:21:31.933373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:21:31.933385 | orchestrator | 2026-02-28 00:21:31.933396 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-28 00:21:31.933407 | orchestrator | Saturday 28 February 2026 00:21:26 +0000 (0:00:00.080) 0:00:04.085 ***** 2026-02-28 00:21:31.933418 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:31.933429 | orchestrator | 2026-02-28 00:21:31.933440 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-28 00:21:31.933451 | orchestrator | Saturday 28 February 2026 00:21:27 +0000 (0:00:01.042) 0:00:05.127 ***** 2026-02-28 00:21:31.933462 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:31.933473 | orchestrator | 2026-02-28 00:21:31.933484 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-28 00:21:31.933495 | orchestrator | Saturday 28 February 2026 00:21:27 +0000 (0:00:00.060) 0:00:05.188 ***** 2026-02-28 00:21:31.933506 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:31.933558 | orchestrator | 2026-02-28 00:21:31.933573 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-28 00:21:31.933585 | orchestrator | Saturday 28 February 2026 00:21:27 +0000 (0:00:00.497) 0:00:05.685 ***** 2026-02-28 00:21:31.933598 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:31.933610 | orchestrator | 2026-02-28 00:21:31.933623 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-28 00:21:31.933636 | orchestrator | Saturday 28 February 2026 00:21:28 +0000 (0:00:00.080) 0:00:05.765 ***** 2026-02-28 00:21:31.933648 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:31.933660 | orchestrator | 2026-02-28 00:21:31.933673 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-28 00:21:31.933685 | orchestrator | Saturday 28 February 2026 00:21:28 +0000 (0:00:00.509) 0:00:06.275 ***** 2026-02-28 00:21:31.933697 | orchestrator | changed: 
[testbed-manager] 2026-02-28 00:21:31.933709 | orchestrator | 2026-02-28 00:21:31.933722 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-28 00:21:31.933734 | orchestrator | Saturday 28 February 2026 00:21:29 +0000 (0:00:01.026) 0:00:07.302 ***** 2026-02-28 00:21:31.933746 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:31.933759 | orchestrator | 2026-02-28 00:21:31.933794 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-28 00:21:31.933808 | orchestrator | Saturday 28 February 2026 00:21:30 +0000 (0:00:00.930) 0:00:08.233 ***** 2026-02-28 00:21:31.933820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-28 00:21:31.933833 | orchestrator | 2026-02-28 00:21:31.933846 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-28 00:21:31.933858 | orchestrator | Saturday 28 February 2026 00:21:30 +0000 (0:00:00.061) 0:00:08.294 ***** 2026-02-28 00:21:31.933871 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:31.933883 | orchestrator | 2026-02-28 00:21:31.933896 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:21:31.933910 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:21:31.933921 | orchestrator | 2026-02-28 00:21:31.933931 | orchestrator | 2026-02-28 00:21:31.933942 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:21:31.933953 | orchestrator | Saturday 28 February 2026 00:21:31 +0000 (0:00:01.147) 0:00:09.441 ***** 2026-02-28 00:21:31.933964 | orchestrator | =============================================================================== 2026-02-28 00:21:31.933974 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.73s 2026-02-28 00:21:31.933985 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s 2026-02-28 00:21:31.933996 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.04s 2026-02-28 00:21:31.934006 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.03s 2026-02-28 00:21:31.934078 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2026-02-28 00:21:31.934093 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s 2026-02-28 00:21:31.934124 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2026-02-28 00:21:31.934136 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-02-28 00:21:31.934147 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-28 00:21:31.934158 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-02-28 00:21:31.934168 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.06s 2026-02-28 00:21:31.934179 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-02-28 00:21:31.934190 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2026-02-28 00:21:32.221849 | orchestrator | + osism apply sshconfig 2026-02-28 00:21:44.248990 | orchestrator | 2026-02-28 00:21:44 | INFO  | Prepare task for execution of sshconfig. 2026-02-28 00:21:44.327400 | orchestrator | 2026-02-28 00:21:44 | INFO  | Task eb995961-ccb5-471b-8d2d-53ea7c988fdc (sshconfig) was prepared for execution. 
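The resolvconf play above reduces to a few concrete host changes: the packages that manage `/etc/resolv.conf` are removed, the file is relinked to the systemd-resolved stub, and the service is restarted. A minimal shell sketch of the linking step, with the target path parameterized purely for illustration (the real role is the Ansible tasks shown in the recap):

```shell
# Rough shell equivalent of the "Link /run/systemd/resolve/stub-resolv.conf
# to /etc/resolv.conf" task above. The target path is a parameter so the
# function can be exercised outside a real systemd host; the role itself
# always targets /etc/resolv.conf.
link_stub_resolv() {
    local target="${1:-/etc/resolv.conf}"
    # systemd-resolved serves a stub resolver config at this well-known path;
    # -n replaces an existing symlink instead of descending into it.
    ln -sfn /run/systemd/resolve/stub-resolv.conf "$target"
}
```

After the link is in place, local lookups go through the 127.0.0.53 stub resolver, which is why the play finishes by restarting `systemd-resolved`.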
2026-02-28 00:21:44.327494 | orchestrator | 2026-02-28 00:21:44 | INFO  | It takes a moment until task eb995961-ccb5-471b-8d2d-53ea7c988fdc (sshconfig) has been started and output is visible here. 2026-02-28 00:21:55.076324 | orchestrator | 2026-02-28 00:21:55.076440 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-28 00:21:55.076458 | orchestrator | 2026-02-28 00:21:55.076471 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-28 00:21:55.076482 | orchestrator | Saturday 28 February 2026 00:21:48 +0000 (0:00:00.123) 0:00:00.123 ***** 2026-02-28 00:21:55.076494 | orchestrator | ok: [testbed-manager] 2026-02-28 00:21:55.076506 | orchestrator | 2026-02-28 00:21:55.076517 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-28 00:21:55.076528 | orchestrator | Saturday 28 February 2026 00:21:48 +0000 (0:00:00.469) 0:00:00.592 ***** 2026-02-28 00:21:55.076654 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:55.076679 | orchestrator | 2026-02-28 00:21:55.076698 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-28 00:21:55.076718 | orchestrator | Saturday 28 February 2026 00:21:49 +0000 (0:00:00.413) 0:00:01.006 ***** 2026-02-28 00:21:55.076738 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-28 00:21:55.076759 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-28 00:21:55.076781 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-28 00:21:55.076801 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-28 00:21:55.076821 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-28 00:21:55.076833 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-02-28 00:21:55.076843 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-02-28 00:21:55.076854 | orchestrator | 2026-02-28 00:21:55.076864 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-28 00:21:55.076875 | orchestrator | Saturday 28 February 2026 00:21:54 +0000 (0:00:05.034) 0:00:06.040 ***** 2026-02-28 00:21:55.076885 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:21:55.076896 | orchestrator | 2026-02-28 00:21:55.076906 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-28 00:21:55.076917 | orchestrator | Saturday 28 February 2026 00:21:54 +0000 (0:00:00.085) 0:00:06.125 ***** 2026-02-28 00:21:55.076927 | orchestrator | changed: [testbed-manager] 2026-02-28 00:21:55.076938 | orchestrator | 2026-02-28 00:21:55.076949 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:21:55.076962 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:21:55.076974 | orchestrator | 2026-02-28 00:21:55.076985 | orchestrator | 2026-02-28 00:21:55.076996 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:21:55.077007 | orchestrator | Saturday 28 February 2026 00:21:54 +0000 (0:00:00.539) 0:00:06.664 ***** 2026-02-28 00:21:55.077018 | orchestrator | =============================================================================== 2026-02-28 00:21:55.077029 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.03s 2026-02-28 00:21:55.077040 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2026-02-28 00:21:55.077050 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.47s 2026-02-28 00:21:55.077061 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.41s 2026-02-28 00:21:55.077072 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-02-28 00:21:55.340330 | orchestrator | + osism apply known-hosts 2026-02-28 00:22:07.381739 | orchestrator | 2026-02-28 00:22:07 | INFO  | Prepare task for execution of known-hosts. 2026-02-28 00:22:07.447872 | orchestrator | 2026-02-28 00:22:07 | INFO  | Task 2b8b587f-3305-4718-b43b-5df9c4e427b4 (known-hosts) was prepared for execution. 2026-02-28 00:22:07.447971 | orchestrator | 2026-02-28 00:22:07 | INFO  | It takes a moment until task 2b8b587f-3305-4718-b43b-5df9c4e427b4 (known-hosts) has been started and output is visible here. 2026-02-28 00:22:23.295145 | orchestrator | 2026-02-28 00:22:23.295249 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-28 00:22:23.295266 | orchestrator | 2026-02-28 00:22:23.295277 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-28 00:22:23.295288 | orchestrator | Saturday 28 February 2026 00:22:11 +0000 (0:00:00.158) 0:00:00.158 ***** 2026-02-28 00:22:23.295299 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-28 00:22:23.295309 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-28 00:22:23.295319 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-28 00:22:23.295347 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-28 00:22:23.295357 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-28 00:22:23.295367 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-28 00:22:23.295376 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-28 00:22:23.295385 | orchestrator | 2026-02-28 00:22:23.295395 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-28 
00:22:23.295405 | orchestrator | Saturday 28 February 2026 00:22:17 +0000 (0:00:05.907) 0:00:06.065 ***** 2026-02-28 00:22:23.295425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-28 00:22:23.295438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-28 00:22:23.295448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-28 00:22:23.295458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-28 00:22:23.295467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-28 00:22:23.295476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-28 00:22:23.295486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-28 00:22:23.295495 | orchestrator | 2026-02-28 00:22:23.295505 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:23.295514 | orchestrator | Saturday 28 February 2026 00:22:17 +0000 (0:00:00.195) 0:00:06.260 ***** 2026-02-28 00:22:23.295534 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkDCSE1lChD5zAF1CKZdS6flG6K9fXuqCdNN90LXtvSWApWttgZGK1kEdvjmFxKB+055ooWxFPBRR4JhMkkFUSlzSdYWuRT2epFJd2GpHNtwvr5Z16KF8G2q0wHJJM/I54/rP/b/xzwTE6H5rZukgyn+miPoaLW1e+zZ5GBTRJPaSqXIyZ+Zm9P2bdv70KcDzVTM4kCWiB4qOwKdniM5V1703jjEGRjuDYoQH6rNbCBhc8LoTZNJtHtjMMgGiVH2g8ncnvWtI63i9X4IeD3KCqsqk6Zfb2OrakkMxfdbKp13xfPr9ijhOBoSxIWcrc5ogxoytVUwAuqmiIr99PO7K+w5JIe9YCvMyQO0HISa0Cd4OUgdqj230QuU37noYyVArdn6C4vWeu6LM8QhfavrjaI1wp+l+TNm6jaozmHMDsE1czRRuL9LwnKWf3eKp1OzuagXK5cm8qm4KuT2GY9EIsBLX6RMfqNVNtEswQlBffP5zD2xMu9C3H6ENLVhZY2Mc=) 2026-02-28 00:22:23.295555 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPa72MyZCm2cMhx2X5vwY/ymj1ID8sr+fpD1dA8cwE9AV6ztFfk/xrWIa+CJQ/W/EXUkuFCCCDyylUF671hRKwc=) 2026-02-28 00:22:23.295575 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIafbOsJG3FZ7Hq2tPM42YFH3UKlJqrIhtErQq+lp2UW) 2026-02-28 00:22:23.295593 | orchestrator | 2026-02-28 00:22:23.295681 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:23.295700 | orchestrator | Saturday 28 February 2026 00:22:18 +0000 (0:00:01.149) 0:00:07.410 ***** 2026-02-28 00:22:23.295718 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAzcGbTiEIh8mcSA1NhxZNNVOQBPS8QCH7j8L1VtNsr7) 2026-02-28 00:22:23.295776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDXw/wPaozh4eSHc9CiuCSVA6IPxVyrtpVP6KxOS/rK3Pxc9QpVFuk4vY7StLiZ1QqQtXPJ934DxdE4jeH/lJo/AZZ/NDPu4Uc9jlq9luj3pz7MLWNCXv4Vtlh6OsF1Myja6W3EkDVQH7XYYUTzf8+tgyfv1cN9fF1nhQrTd0I8O2NYqhHv5hpfebwJCDEbOjqMbwPTvAyoPSGhw5KQU4mqzblxJRpAgAO8ok/ISmgGO9MWEjUvQ+s4RaclXgHlDm4QOadt0NnlhH8ot4xs8L9WVb5ih0fiBm8DdpnjDQivgRN6Tjj90+2jHCzr9roWHNuVh8gDv5wlfj7siWX6J6dv5Nz681oNw+DqSnws0qMwdsxpLrBv2KesIPe8RuVyhI9O2t4EHpabd9gIT/91w+s9dzZTiUiJubCiYNKWRRm0sEKG6Ze4TTfuGvM9HUIDmAcZYc5FyjZvE+mP6NsbmG65llK///H0JpDzos/drvZ7jbM9OLldM7s02OLhDye4pSM=) 2026-02-28 00:22:23.295813 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCQm0tS6/rkVL14BmXiBos30tlkjpDEcE6fTNaEnu86xzC3cpTEDUc+/jkkf8RUFVBa+bDrZvkZqs1COL2a2Djc=) 2026-02-28 00:22:23.295826 | orchestrator | 2026-02-28 00:22:23.295837 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:23.295848 | orchestrator | Saturday 28 February 2026 00:22:19 +0000 (0:00:01.025) 0:00:08.435 ***** 2026-02-28 00:22:23.295860 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1nBPrlaPMuP/ScRqmR5LXew3S5O+AM/F26EDCDN7H2SePu8x+Ea4FNW3rbKynA7/lWB0yFv0GGBE2rqiWChS6HlUYOckElH/QQEPjdezgaD43HNuo+ecaVoAyZn7i1dwiHSM8xKomaquMCNUgosl7/zoQOCSSvK1V5lRIetOQLhnGCnzPr/smz/MrtE/Jv10PYZbUNBQDUnDZmvcnIj19d7L961bW9ukoEr+0ST/BKFScLsxrESDg2zlfMw6nTHG4t0B8VI9+LPNJmtiojqmJq69hVAxdJDGNArxi7EscRPmYtyLLSrCbiFb9We+2LQr4PXx8H1/lJwauIsCIeMlUh7C+Z6nxrMIVRqH/SCv6D7JNNYaWZHTasSx/zM1kUhdUR27Q6g3XSZbGbNIxyoWuGSjfe59KIVgjDh1zbF8XBNnOgK1e9B5eLlhOAVgYscRRvNCfAb6w7Gu+ujJfmWvAAWlsQ3cYakFKgkG2YbbX0LMj6c4YnAQGK2fAnfMd9ec=) 2026-02-28 00:22:23.295872 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJksaXQldTwjbHAAkbUNinUEe3CL27wogyqlLCGtdLn4) 2026-02-28 00:22:23.295949 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOk4M/o3ADLrrEkjBmxYwmsC+2qXqDqJzh6SYK0hO+OsMaPGt/pG+mn+v1t/wqyjLh0hLRtY2HOSON4c0JMm304=) 2026-02-28 00:22:23.295961 | orchestrator | 2026-02-28 00:22:23.295972 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:23.295983 | orchestrator | Saturday 28 February 2026 00:22:20 +0000 (0:00:01.039) 0:00:09.475 ***** 2026-02-28 00:22:23.295999 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGwPkXQzrRmoMj9aIUKoRjgRpo3jqeuduEPv2HBAHUcD3Nt1aQ5l0LYwjbUtGPYMCItmwpUHlmtvORAr6XrXjUM=) 2026-02-28 00:22:23.296011 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCasNkFM38exCnJKlmej7GfySrbxwIhWyCKAWZtvuM0/Z8jLQ3W4gqVyeo6ReUFS/7qo/+gjdAEafINCqPdspYGLLNyZKSTfi0m8rdVLBrir18lzcU44plEZ4HdnWOyRFUQfekekcJ55ycaa21w+ulZqg/Nw8KgcoqYz7z6iBm3fcyUAjyKlx67JURjKxBrUcgK3TjK03OnpYuOxGEmW+doSmypTTfXyjs0EYuIk3UeNz9GbephjWfFjG9N7ULYu5Kana7J49v/d74O/C+ZMPS5fwx7B4qVWtugCQZ+iSkGu/ZWs/e7KOr6JtXJkpQaAzzX0H2l1J84HeiAy6fTPojlDfK8eIAYykR+HMKy1v1k7G6biGAQ1wx8r3kV71ebG48esBvwyDV5xAj8h3F2+rl9r9X0084RhaO/1jjz04orgJGwsi4JL5XKRIUzIp8hLt+n2A5HVawttmuKmOi8jmurH4D+uQEz+MPYKFQUSnF0xzvftCiZsWzFK5DQL6rnVO0=) 2026-02-28 00:22:23.296023 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPhLGHcSHLopIqToAjcKnOZU4Dlw8HE6DVOtMNcPhpsa) 2026-02-28 00:22:23.296033 | orchestrator | 2026-02-28 00:22:23.296042 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:23.296053 | orchestrator | Saturday 28 February 2026 00:22:21 +0000 (0:00:01.027) 0:00:10.502 ***** 2026-02-28 00:22:23.296070 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDc9f7Po/aazCIFywLpRfM2gnmBxqiAGCyyys5I4HP0ngG5+EgTYvTHWXSgaKeWa2iRoKKljf9DhvknQcaxzwFHpsa82jaDplmRSb9brSSYK5NB5OzFpObLuNhifOK3dx3Y0CVi7CamYIbHuosl65sDa2BeMVG8vrUW7TIuIGri2PHnGa9JiNLPgiLIc+1CwaB+1ugX1iXN7cn+x/fmCR80n6tueFjdziENR6AugCR62AEWJsmKyJB1e4nw6cK2h9GmuOWJkdnNw4qqqH1TeMAdpG5oo0ZO/LRHO0FyT2l2+59WN5CGZOPjzlvEpbzCoPB5rDa4sCLYPj90TVZ4vYkJt70rtfIIDdRL6S8ga+wIjafavcd8/hVz6n66nfXeEZPFKKf+HpaWXgoFqcfHfhS4T34U9HSjWP1pBzHc/RnrAyRfVjrViW80u1+4GmNtvnTHfr19bHlmQ0xd1CyTFxonpLff+rSN2Qj2NZh6sPkCJlrIMbBcX6hbHgNJldh8vts=) 2026-02-28 00:22:23.296096 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKzIXXZhBnXd+v+0D8hQHhtOl0kFIBzh0FuWNG8gsBvBPUDt1//W7xrcqM/NQ9oRkbTf1UrcUVF2J568GUUeBU=) 2026-02-28 00:22:23.296112 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICDPOqbMTmBHtIDdvJHDbPhCRM77mgeSJPAoGSQGQOJp) 2026-02-28 00:22:23.296126 | orchestrator | 2026-02-28 00:22:23.296142 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:23.296158 | orchestrator | Saturday 28 February 2026 00:22:22 +0000 (0:00:01.015) 0:00:11.518 ***** 2026-02-28 00:22:23.296188 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlODGgJ9v344M3VrzZ9qJlh2LZas+wpTeep4fgeBxvLvz278piPEeGGcwHOu4B/3F1++TYsCKcfDiBfCOrw/MAF4Kuj5kzCw6I4Lz3/p3U4JDmzT89G8WX5mV1tQsOPWwp5m2TzMkl9d1g3CrNMeOUdXWzqxZYX3tCFUOAK16ZbmpR9ZXptcrydFLPYj4Ly5SkjRxkaiwkESaOAhJL8c5n+zK+HhHQeN4ahqsO9Dc4IpU5WN+UniGboCHfNvCJSZjd6guGM/2SjGy/1l23llahHQyU1Gk6G3dfiqW5UW9MYlahlBqXAwtK2yF5khkk9HzH16OhONxBy8EbdUmCBo8L80eoWyMnOk7uKleJTt6pQ4+BbaIHAEZ1S2eKgLL4Ua0xhwP2raeQpqlPskddx+x3cAjKCK0y77jrijmMdBjNcAGbanGJ4YB8PN2Vim+NDxZ1+4o+FtkvipoH2T6K/4xmRf/stbdg/w4EdeiTob5DkXsX4rpeA6NOZV5nFsKt8dk=) 2026-02-28 00:22:34.294111 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFl0tgC1/OUdT6HPSk69mZo6+JqDge/JUfvyzL/VDVaMH+jNY1VAz/3Rbi0Cpil+5s69BdSKqueY8Xd45c9uDz4=) 2026-02-28 00:22:34.294225 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBgnr27nLEVqKuFZbhy1ARv4ObyjCEAVAE3ilOYZu+gQ) 2026-02-28 00:22:34.294243 | orchestrator | 2026-02-28 00:22:34.294256 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:34.294268 | orchestrator | Saturday 28 February 2026 00:22:23 +0000 (0:00:01.034) 0:00:12.552 ***** 2026-02-28 00:22:34.294282 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2r1q8+ZsWtNhwRLvN1jZsnUYDoQ4g7Uw7kjYDZSF/QMr8ZIeNMfetEDXHwhdkVf6lceXSbexRMSxZZ3JdKBCHlbOza3wqYTQlPOxEmv8A5KLmEtCMSOk4VvHYswdLbFgChy52pnFbFmIj3sljypbhG4dlqxTFuymD/2MmJefhGKrvO2dkWVCBg4YFbxDEbbFYbdLKSfAUZ64LYl7nW/w8vxpOxHL573fOnjs0vb+b7ujmycL5nrHacFMMEi8UvkC76Xq3JcCzkvJJET5tV2dPmT6WeUC6tVQ/CP71XxhAZOycV9Q86lqfSvNoVCADH8D+9rnC80huJLT5prWyKuveEsQ8d1VRXhoRqzg9xEseWjrywjVBiyu1YenafMg3TCTDeujC3tOILmDoVukY/IDlS18KLYXv0C25kHcAJxm6gCrYE8kSGTc6GwFN/+Ma9Dvhsy3kkWouBoFpweLI9Iwxgw4/pKK5FI+YxLfIXPFAteYkH/vELMf8NqeZZVEpp8s=) 2026-02-28 00:22:34.294297 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAEsf+6IbiAeCVQr73ubIuovW/gZGek/UkFvO5tzm/dNGb56/PZwMK2CeqgPocyQSwCeT/N2iSGhTE1uitkfEzM=) 2026-02-28 00:22:34.294308 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKxwfGyJcCHTEHNpILss/Ow4WrcJnIYFojFj8jQVNdF5) 2026-02-28 00:22:34.294319 | orchestrator | 2026-02-28 00:22:34.294331 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-28 00:22:34.294343 | orchestrator | Saturday 28 February 2026 00:22:24 +0000 
(0:00:01.003) 0:00:13.556 ***** 2026-02-28 00:22:34.294354 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-28 00:22:34.294366 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-28 00:22:34.294377 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-28 00:22:34.294388 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-28 00:22:34.294399 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-28 00:22:34.294428 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-28 00:22:34.294440 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-28 00:22:34.294473 | orchestrator | 2026-02-28 00:22:34.294485 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-28 00:22:34.294497 | orchestrator | Saturday 28 February 2026 00:22:30 +0000 (0:00:05.105) 0:00:18.662 ***** 2026-02-28 00:22:34.294509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-28 00:22:34.294521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-28 00:22:34.294532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-28 00:22:34.294543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-28 00:22:34.294554 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-28 00:22:34.294565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-28 00:22:34.294575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-28 00:22:34.294586 | orchestrator | 2026-02-28 00:22:34.294597 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:34.294608 | orchestrator | Saturday 28 February 2026 00:22:30 +0000 (0:00:00.179) 0:00:18.842 ***** 2026-02-28 00:22:34.294685 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkDCSE1lChD5zAF1CKZdS6flG6K9fXuqCdNN90LXtvSWApWttgZGK1kEdvjmFxKB+055ooWxFPBRR4JhMkkFUSlzSdYWuRT2epFJd2GpHNtwvr5Z16KF8G2q0wHJJM/I54/rP/b/xzwTE6H5rZukgyn+miPoaLW1e+zZ5GBTRJPaSqXIyZ+Zm9P2bdv70KcDzVTM4kCWiB4qOwKdniM5V1703jjEGRjuDYoQH6rNbCBhc8LoTZNJtHtjMMgGiVH2g8ncnvWtI63i9X4IeD3KCqsqk6Zfb2OrakkMxfdbKp13xfPr9ijhOBoSxIWcrc5ogxoytVUwAuqmiIr99PO7K+w5JIe9YCvMyQO0HISa0Cd4OUgdqj230QuU37noYyVArdn6C4vWeu6LM8QhfavrjaI1wp+l+TNm6jaozmHMDsE1czRRuL9LwnKWf3eKp1OzuagXK5cm8qm4KuT2GY9EIsBLX6RMfqNVNtEswQlBffP5zD2xMu9C3H6ENLVhZY2Mc=) 2026-02-28 00:22:34.294707 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPa72MyZCm2cMhx2X5vwY/ymj1ID8sr+fpD1dA8cwE9AV6ztFfk/xrWIa+CJQ/W/EXUkuFCCCDyylUF671hRKwc=) 2026-02-28 00:22:34.294726 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIafbOsJG3FZ7Hq2tPM42YFH3UKlJqrIhtErQq+lp2UW) 2026-02-28 
00:22:34.294737 | orchestrator | 2026-02-28 00:22:34.294748 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:34.294759 | orchestrator | Saturday 28 February 2026 00:22:31 +0000 (0:00:01.036) 0:00:19.878 ***** 2026-02-28 00:22:34.294771 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXw/wPaozh4eSHc9CiuCSVA6IPxVyrtpVP6KxOS/rK3Pxc9QpVFuk4vY7StLiZ1QqQtXPJ934DxdE4jeH/lJo/AZZ/NDPu4Uc9jlq9luj3pz7MLWNCXv4Vtlh6OsF1Myja6W3EkDVQH7XYYUTzf8+tgyfv1cN9fF1nhQrTd0I8O2NYqhHv5hpfebwJCDEbOjqMbwPTvAyoPSGhw5KQU4mqzblxJRpAgAO8ok/ISmgGO9MWEjUvQ+s4RaclXgHlDm4QOadt0NnlhH8ot4xs8L9WVb5ih0fiBm8DdpnjDQivgRN6Tjj90+2jHCzr9roWHNuVh8gDv5wlfj7siWX6J6dv5Nz681oNw+DqSnws0qMwdsxpLrBv2KesIPe8RuVyhI9O2t4EHpabd9gIT/91w+s9dzZTiUiJubCiYNKWRRm0sEKG6Ze4TTfuGvM9HUIDmAcZYc5FyjZvE+mP6NsbmG65llK///H0JpDzos/drvZ7jbM9OLldM7s02OLhDye4pSM=) 2026-02-28 00:22:34.294791 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCQm0tS6/rkVL14BmXiBos30tlkjpDEcE6fTNaEnu86xzC3cpTEDUc+/jkkf8RUFVBa+bDrZvkZqs1COL2a2Djc=) 2026-02-28 00:22:34.294802 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAzcGbTiEIh8mcSA1NhxZNNVOQBPS8QCH7j8L1VtNsr7) 2026-02-28 00:22:34.294813 | orchestrator | 2026-02-28 00:22:34.294823 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:34.294834 | orchestrator | Saturday 28 February 2026 00:22:32 +0000 (0:00:00.996) 0:00:20.875 ***** 2026-02-28 00:22:34.294845 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1nBPrlaPMuP/ScRqmR5LXew3S5O+AM/F26EDCDN7H2SePu8x+Ea4FNW3rbKynA7/lWB0yFv0GGBE2rqiWChS6HlUYOckElH/QQEPjdezgaD43HNuo+ecaVoAyZn7i1dwiHSM8xKomaquMCNUgosl7/zoQOCSSvK1V5lRIetOQLhnGCnzPr/smz/MrtE/Jv10PYZbUNBQDUnDZmvcnIj19d7L961bW9ukoEr+0ST/BKFScLsxrESDg2zlfMw6nTHG4t0B8VI9+LPNJmtiojqmJq69hVAxdJDGNArxi7EscRPmYtyLLSrCbiFb9We+2LQr4PXx8H1/lJwauIsCIeMlUh7C+Z6nxrMIVRqH/SCv6D7JNNYaWZHTasSx/zM1kUhdUR27Q6g3XSZbGbNIxyoWuGSjfe59KIVgjDh1zbF8XBNnOgK1e9B5eLlhOAVgYscRRvNCfAb6w7Gu+ujJfmWvAAWlsQ3cYakFKgkG2YbbX0LMj6c4YnAQGK2fAnfMd9ec=) 2026-02-28 00:22:34.294857 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOk4M/o3ADLrrEkjBmxYwmsC+2qXqDqJzh6SYK0hO+OsMaPGt/pG+mn+v1t/wqyjLh0hLRtY2HOSON4c0JMm304=) 2026-02-28 00:22:34.294868 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJksaXQldTwjbHAAkbUNinUEe3CL27wogyqlLCGtdLn4) 2026-02-28 00:22:34.294879 | orchestrator | 2026-02-28 00:22:34.294890 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:34.294901 | orchestrator | Saturday 28 February 2026 00:22:33 +0000 (0:00:01.025) 0:00:21.900 ***** 2026-02-28 00:22:34.294911 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPhLGHcSHLopIqToAjcKnOZU4Dlw8HE6DVOtMNcPhpsa) 2026-02-28 00:22:34.294929 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCasNkFM38exCnJKlmej7GfySrbxwIhWyCKAWZtvuM0/Z8jLQ3W4gqVyeo6ReUFS/7qo/+gjdAEafINCqPdspYGLLNyZKSTfi0m8rdVLBrir18lzcU44plEZ4HdnWOyRFUQfekekcJ55ycaa21w+ulZqg/Nw8KgcoqYz7z6iBm3fcyUAjyKlx67JURjKxBrUcgK3TjK03OnpYuOxGEmW+doSmypTTfXyjs0EYuIk3UeNz9GbephjWfFjG9N7ULYu5Kana7J49v/d74O/C+ZMPS5fwx7B4qVWtugCQZ+iSkGu/ZWs/e7KOr6JtXJkpQaAzzX0H2l1J84HeiAy6fTPojlDfK8eIAYykR+HMKy1v1k7G6biGAQ1wx8r3kV71ebG48esBvwyDV5xAj8h3F2+rl9r9X0084RhaO/1jjz04orgJGwsi4JL5XKRIUzIp8hLt+n2A5HVawttmuKmOi8jmurH4D+uQEz+MPYKFQUSnF0xzvftCiZsWzFK5DQL6rnVO0=) 2026-02-28 00:22:34.294952 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGwPkXQzrRmoMj9aIUKoRjgRpo3jqeuduEPv2HBAHUcD3Nt1aQ5l0LYwjbUtGPYMCItmwpUHlmtvORAr6XrXjUM=) 2026-02-28 00:22:38.596244 | orchestrator | 2026-02-28 00:22:38.596344 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:38.596361 | orchestrator | Saturday 28 February 2026 00:22:34 +0000 (0:00:01.025) 0:00:22.926 ***** 2026-02-28 00:22:38.596373 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICDPOqbMTmBHtIDdvJHDbPhCRM77mgeSJPAoGSQGQOJp) 2026-02-28 00:22:38.596387 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDc9f7Po/aazCIFywLpRfM2gnmBxqiAGCyyys5I4HP0ngG5+EgTYvTHWXSgaKeWa2iRoKKljf9DhvknQcaxzwFHpsa82jaDplmRSb9brSSYK5NB5OzFpObLuNhifOK3dx3Y0CVi7CamYIbHuosl65sDa2BeMVG8vrUW7TIuIGri2PHnGa9JiNLPgiLIc+1CwaB+1ugX1iXN7cn+x/fmCR80n6tueFjdziENR6AugCR62AEWJsmKyJB1e4nw6cK2h9GmuOWJkdnNw4qqqH1TeMAdpG5oo0ZO/LRHO0FyT2l2+59WN5CGZOPjzlvEpbzCoPB5rDa4sCLYPj90TVZ4vYkJt70rtfIIDdRL6S8ga+wIjafavcd8/hVz6n66nfXeEZPFKKf+HpaWXgoFqcfHfhS4T34U9HSjWP1pBzHc/RnrAyRfVjrViW80u1+4GmNtvnTHfr19bHlmQ0xd1CyTFxonpLff+rSN2Qj2NZh6sPkCJlrIMbBcX6hbHgNJldh8vts=) 2026-02-28 00:22:38.596427 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKzIXXZhBnXd+v+0D8hQHhtOl0kFIBzh0FuWNG8gsBvBPUDt1//W7xrcqM/NQ9oRkbTf1UrcUVF2J568GUUeBU=) 2026-02-28 00:22:38.596440 | orchestrator | 2026-02-28 00:22:38.596464 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:38.596474 | orchestrator | Saturday 28 February 2026 00:22:35 +0000 (0:00:01.015) 0:00:23.942 ***** 2026-02-28 00:22:38.596484 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlODGgJ9v344M3VrzZ9qJlh2LZas+wpTeep4fgeBxvLvz278piPEeGGcwHOu4B/3F1++TYsCKcfDiBfCOrw/MAF4Kuj5kzCw6I4Lz3/p3U4JDmzT89G8WX5mV1tQsOPWwp5m2TzMkl9d1g3CrNMeOUdXWzqxZYX3tCFUOAK16ZbmpR9ZXptcrydFLPYj4Ly5SkjRxkaiwkESaOAhJL8c5n+zK+HhHQeN4ahqsO9Dc4IpU5WN+UniGboCHfNvCJSZjd6guGM/2SjGy/1l23llahHQyU1Gk6G3dfiqW5UW9MYlahlBqXAwtK2yF5khkk9HzH16OhONxBy8EbdUmCBo8L80eoWyMnOk7uKleJTt6pQ4+BbaIHAEZ1S2eKgLL4Ua0xhwP2raeQpqlPskddx+x3cAjKCK0y77jrijmMdBjNcAGbanGJ4YB8PN2Vim+NDxZ1+4o+FtkvipoH2T6K/4xmRf/stbdg/w4EdeiTob5DkXsX4rpeA6NOZV5nFsKt8dk=) 2026-02-28 00:22:38.596494 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFl0tgC1/OUdT6HPSk69mZo6+JqDge/JUfvyzL/VDVaMH+jNY1VAz/3Rbi0Cpil+5s69BdSKqueY8Xd45c9uDz4=) 2026-02-28 00:22:38.596504 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBgnr27nLEVqKuFZbhy1ARv4ObyjCEAVAE3ilOYZu+gQ) 2026-02-28 00:22:38.596513 | orchestrator | 2026-02-28 00:22:38.596523 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-28 00:22:38.596532 | orchestrator | Saturday 28 February 2026 00:22:36 +0000 (0:00:01.027) 0:00:24.969 ***** 2026-02-28 00:22:38.596542 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC2r1q8+ZsWtNhwRLvN1jZsnUYDoQ4g7Uw7kjYDZSF/QMr8ZIeNMfetEDXHwhdkVf6lceXSbexRMSxZZ3JdKBCHlbOza3wqYTQlPOxEmv8A5KLmEtCMSOk4VvHYswdLbFgChy52pnFbFmIj3sljypbhG4dlqxTFuymD/2MmJefhGKrvO2dkWVCBg4YFbxDEbbFYbdLKSfAUZ64LYl7nW/w8vxpOxHL573fOnjs0vb+b7ujmycL5nrHacFMMEi8UvkC76Xq3JcCzkvJJET5tV2dPmT6WeUC6tVQ/CP71XxhAZOycV9Q86lqfSvNoVCADH8D+9rnC80huJLT5prWyKuveEsQ8d1VRXhoRqzg9xEseWjrywjVBiyu1YenafMg3TCTDeujC3tOILmDoVukY/IDlS18KLYXv0C25kHcAJxm6gCrYE8kSGTc6GwFN/+Ma9Dvhsy3kkWouBoFpweLI9Iwxgw4/pKK5FI+YxLfIXPFAteYkH/vELMf8NqeZZVEpp8s=) 2026-02-28 00:22:38.596552 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAEsf+6IbiAeCVQr73ubIuovW/gZGek/UkFvO5tzm/dNGb56/PZwMK2CeqgPocyQSwCeT/N2iSGhTE1uitkfEzM=) 2026-02-28 00:22:38.596562 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKxwfGyJcCHTEHNpILss/Ow4WrcJnIYFojFj8jQVNdF5) 2026-02-28 00:22:38.596571 | orchestrator | 2026-02-28 00:22:38.596581 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-28 00:22:38.596590 | orchestrator | Saturday 28 February 2026 00:22:37 +0000 (0:00:01.037) 0:00:26.007 ***** 2026-02-28 00:22:38.596600 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-28 00:22:38.596610 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-28 00:22:38.596619 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-28 00:22:38.596676 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-28 00:22:38.596686 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-28 00:22:38.596695 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-28 00:22:38.596705 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-28 00:22:38.596715 | orchestrator | 
skipping: [testbed-manager] 2026-02-28 00:22:38.596724 | orchestrator | 2026-02-28 00:22:38.596751 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-28 00:22:38.596762 | orchestrator | Saturday 28 February 2026 00:22:37 +0000 (0:00:00.164) 0:00:26.172 ***** 2026-02-28 00:22:38.596779 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:22:38.596789 | orchestrator | 2026-02-28 00:22:38.596798 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-28 00:22:38.596808 | orchestrator | Saturday 28 February 2026 00:22:37 +0000 (0:00:00.044) 0:00:26.217 ***** 2026-02-28 00:22:38.596817 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:22:38.596827 | orchestrator | 2026-02-28 00:22:38.596836 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-28 00:22:38.596846 | orchestrator | Saturday 28 February 2026 00:22:37 +0000 (0:00:00.057) 0:00:26.275 ***** 2026-02-28 00:22:38.596855 | orchestrator | changed: [testbed-manager] 2026-02-28 00:22:38.596865 | orchestrator | 2026-02-28 00:22:38.596874 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:22:38.596884 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:22:38.596895 | orchestrator | 2026-02-28 00:22:38.596905 | orchestrator | 2026-02-28 00:22:38.596914 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:22:38.596924 | orchestrator | Saturday 28 February 2026 00:22:38 +0000 (0:00:00.710) 0:00:26.985 ***** 2026-02-28 00:22:38.596934 | orchestrator | =============================================================================== 2026-02-28 00:22:38.596943 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.91s 2026-02-28 
00:22:38.596953 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.11s 2026-02-28 00:22:38.596963 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-28 00:22:38.596972 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-28 00:22:38.596982 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-28 00:22:38.596991 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-28 00:22:38.597001 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-28 00:22:38.597010 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-28 00:22:38.597019 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-28 00:22:38.597029 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-28 00:22:38.597038 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-28 00:22:38.597048 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-28 00:22:38.597057 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-28 00:22:38.597073 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-28 00:22:38.597083 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-02-28 00:22:38.597092 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-02-28 00:22:38.597102 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.71s 2026-02-28 
00:22:38.597111 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.20s 2026-02-28 00:22:38.597122 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-02-28 00:22:38.597131 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-02-28 00:22:38.867869 | orchestrator | + osism apply squid 2026-02-28 00:22:50.892420 | orchestrator | 2026-02-28 00:22:50 | INFO  | Prepare task for execution of squid. 2026-02-28 00:22:50.961741 | orchestrator | 2026-02-28 00:22:50 | INFO  | Task 38e7cb24-09ec-45ff-a854-1a1106693e75 (squid) was prepared for execution. 2026-02-28 00:22:50.961854 | orchestrator | 2026-02-28 00:22:50 | INFO  | It takes a moment until task 38e7cb24-09ec-45ff-a854-1a1106693e75 (squid) has been started and output is visible here. 2026-02-28 00:24:46.465657 | orchestrator | 2026-02-28 00:24:46.465769 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-28 00:24:46.465838 | orchestrator | 2026-02-28 00:24:46.465851 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-28 00:24:46.465863 | orchestrator | Saturday 28 February 2026 00:22:54 +0000 (0:00:00.119) 0:00:00.119 ***** 2026-02-28 00:24:46.465875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:24:46.465887 | orchestrator | 2026-02-28 00:24:46.465898 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-28 00:24:46.465909 | orchestrator | Saturday 28 February 2026 00:22:54 +0000 (0:00:00.081) 0:00:00.200 ***** 2026-02-28 00:24:46.465920 | orchestrator | ok: [testbed-manager] 2026-02-28 00:24:46.465931 | orchestrator | 2026-02-28 00:24:46.465942 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-28 00:24:46.465953 | orchestrator | Saturday 28 February 2026 00:22:56 +0000 (0:00:01.120) 0:00:01.321 ***** 2026-02-28 00:24:46.465964 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-28 00:24:46.465975 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-28 00:24:46.465985 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-28 00:24:46.465996 | orchestrator | 2026-02-28 00:24:46.466007 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-28 00:24:46.466077 | orchestrator | Saturday 28 February 2026 00:22:57 +0000 (0:00:01.032) 0:00:02.354 ***** 2026-02-28 00:24:46.466090 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-28 00:24:46.466101 | orchestrator | 2026-02-28 00:24:46.466112 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-28 00:24:46.466123 | orchestrator | Saturday 28 February 2026 00:22:57 +0000 (0:00:00.939) 0:00:03.293 ***** 2026-02-28 00:24:46.466134 | orchestrator | ok: [testbed-manager] 2026-02-28 00:24:46.466144 | orchestrator | 2026-02-28 00:24:46.466155 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-28 00:24:46.466166 | orchestrator | Saturday 28 February 2026 00:22:58 +0000 (0:00:00.323) 0:00:03.617 ***** 2026-02-28 00:24:46.466177 | orchestrator | changed: [testbed-manager] 2026-02-28 00:24:46.466188 | orchestrator | 2026-02-28 00:24:46.466199 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-28 00:24:46.466212 | orchestrator | Saturday 28 February 2026 00:22:59 +0000 (0:00:00.810) 0:00:04.427 ***** 2026-02-28 00:24:46.466225 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-02-28 00:24:46.466238 | orchestrator | ok: [testbed-manager] 2026-02-28 00:24:46.466250 | orchestrator | 2026-02-28 00:24:46.466263 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-28 00:24:46.466275 | orchestrator | Saturday 28 February 2026 00:23:29 +0000 (0:00:30.676) 0:00:35.104 ***** 2026-02-28 00:24:46.466288 | orchestrator | changed: [testbed-manager] 2026-02-28 00:24:46.466301 | orchestrator | 2026-02-28 00:24:46.466331 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-28 00:24:46.466344 | orchestrator | Saturday 28 February 2026 00:23:45 +0000 (0:00:15.685) 0:00:50.790 ***** 2026-02-28 00:24:46.466357 | orchestrator | Pausing for 60 seconds 2026-02-28 00:24:46.466370 | orchestrator | changed: [testbed-manager] 2026-02-28 00:24:46.466382 | orchestrator | 2026-02-28 00:24:46.466395 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-28 00:24:46.466408 | orchestrator | Saturday 28 February 2026 00:24:45 +0000 (0:01:00.093) 0:01:50.883 ***** 2026-02-28 00:24:46.466421 | orchestrator | ok: [testbed-manager] 2026-02-28 00:24:46.466433 | orchestrator | 2026-02-28 00:24:46.466445 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-28 00:24:46.466480 | orchestrator | Saturday 28 February 2026 00:24:45 +0000 (0:00:00.060) 0:01:50.943 ***** 2026-02-28 00:24:46.466493 | orchestrator | changed: [testbed-manager] 2026-02-28 00:24:46.466505 | orchestrator | 2026-02-28 00:24:46.466518 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:24:46.466531 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:24:46.466543 | orchestrator | 2026-02-28 00:24:46.466556 | orchestrator | 2026-02-28 00:24:46.466568 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:24:46.466579 | orchestrator | Saturday 28 February 2026 00:24:46 +0000 (0:00:00.577) 0:01:51.520 ***** 2026-02-28 00:24:46.466589 | orchestrator | =============================================================================== 2026-02-28 00:24:46.466600 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-02-28 00:24:46.466610 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.68s 2026-02-28 00:24:46.466621 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.69s 2026-02-28 00:24:46.466632 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.12s 2026-02-28 00:24:46.466642 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.03s 2026-02-28 00:24:46.466653 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.94s 2026-02-28 00:24:46.466663 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.81s 2026-02-28 00:24:46.466674 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.58s 2026-02-28 00:24:46.466684 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s 2026-02-28 00:24:46.466695 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-02-28 00:24:46.466705 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-02-28 00:24:46.758672 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-28 00:24:46.758764 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-02-28 00:24:46.763844 | orchestrator | + set -e 2026-02-28 00:24:46.763901 | orchestrator | + NAMESPACE=kolla 
2026-02-28 00:24:46.763914 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-28 00:24:46.770002 | orchestrator | ++ semver latest 9.0.0 2026-02-28 00:24:46.826162 | orchestrator | + [[ -1 -lt 0 ]] 2026-02-28 00:24:46.826248 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-28 00:24:46.826420 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-28 00:24:58.907101 | orchestrator | 2026-02-28 00:24:58 | INFO  | Prepare task for execution of operator. 2026-02-28 00:24:58.979543 | orchestrator | 2026-02-28 00:24:58 | INFO  | Task 3491417b-72a7-457f-8491-b32c96a22479 (operator) was prepared for execution. 2026-02-28 00:24:58.979637 | orchestrator | 2026-02-28 00:24:58 | INFO  | It takes a moment until task 3491417b-72a7-457f-8491-b32c96a22479 (operator) has been started and output is visible here. 2026-02-28 00:25:14.999047 | orchestrator | 2026-02-28 00:25:14.999181 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-28 00:25:14.999199 | orchestrator | 2026-02-28 00:25:14.999211 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-28 00:25:14.999223 | orchestrator | Saturday 28 February 2026 00:25:03 +0000 (0:00:00.138) 0:00:00.138 ***** 2026-02-28 00:25:14.999234 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:25:14.999247 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:25:14.999258 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:25:14.999269 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:25:14.999280 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:25:14.999291 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:25:14.999306 | orchestrator | 2026-02-28 00:25:14.999317 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-28 00:25:14.999355 | orchestrator | Saturday 28 February 
2026 00:25:06 +0000 (0:00:03.284) 0:00:03.423 ***** 2026-02-28 00:25:14.999366 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:25:14.999377 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:25:14.999388 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:25:14.999399 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:25:14.999409 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:25:14.999420 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:25:14.999431 | orchestrator | 2026-02-28 00:25:14.999442 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-28 00:25:14.999453 | orchestrator | 2026-02-28 00:25:14.999463 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-28 00:25:14.999475 | orchestrator | Saturday 28 February 2026 00:25:07 +0000 (0:00:00.807) 0:00:04.230 ***** 2026-02-28 00:25:14.999485 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:25:14.999496 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:25:14.999507 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:25:14.999518 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:25:14.999529 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:25:14.999539 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:25:14.999550 | orchestrator | 2026-02-28 00:25:14.999561 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-28 00:25:14.999572 | orchestrator | Saturday 28 February 2026 00:25:07 +0000 (0:00:00.163) 0:00:04.394 ***** 2026-02-28 00:25:14.999584 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:25:14.999596 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:25:14.999609 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:25:14.999621 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:25:14.999650 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:25:14.999663 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:25:14.999675 | 
orchestrator |
2026-02-28 00:25:14.999688 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-28 00:25:14.999701 | orchestrator | Saturday 28 February 2026 00:25:07 +0000 (0:00:00.148) 0:00:04.543 *****
2026-02-28 00:25:14.999714 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:14.999727 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:14.999739 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:14.999751 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:14.999764 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:14.999776 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:14.999806 | orchestrator |
2026-02-28 00:25:14.999819 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-28 00:25:14.999832 | orchestrator | Saturday 28 February 2026 00:25:08 +0000 (0:00:00.640) 0:00:05.183 *****
2026-02-28 00:25:14.999844 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:14.999856 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:14.999868 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:14.999879 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:14.999892 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:14.999904 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:14.999916 | orchestrator |
2026-02-28 00:25:14.999928 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-28 00:25:14.999941 | orchestrator | Saturday 28 February 2026 00:25:09 +0000 (0:00:00.840) 0:00:06.023 *****
2026-02-28 00:25:14.999952 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-02-28 00:25:14.999963 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-02-28 00:25:14.999974 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-02-28 00:25:14.999985 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-28 00:25:14.999996 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-02-28 00:25:15.000006 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-28 00:25:15.000018 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-28 00:25:15.000029 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-28 00:25:15.000040 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-28 00:25:15.000059 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-28 00:25:15.000070 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-28 00:25:15.000081 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-28 00:25:15.000092 | orchestrator |
2026-02-28 00:25:15.000102 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-28 00:25:15.000113 | orchestrator | Saturday 28 February 2026 00:25:10 +0000 (0:00:01.145) 0:00:07.169 *****
2026-02-28 00:25:15.000124 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:15.000135 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:15.000145 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:15.000156 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:15.000166 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:15.000177 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:15.000188 | orchestrator |
2026-02-28 00:25:15.000198 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-28 00:25:15.000210 | orchestrator | Saturday 28 February 2026 00:25:11 +0000 (0:00:01.265) 0:00:08.434 *****
2026-02-28 00:25:15.000221 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:15.000232 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:15.000243 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:15.000253 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:15.000264 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:15.000294 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-28 00:25:15.000305 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:15.000316 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:15.000327 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:15.000337 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:15.000348 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:15.000358 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-28 00:25:15.000369 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:15.000379 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-28 00:25:15.000390 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-28 00:25:15.000400 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-28 00:25:15.000411 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:15.000421 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:15.000431 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:15.000442 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:15.000452 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-28 00:25:15.000463 | orchestrator |
2026-02-28 00:25:15.000473 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-28 00:25:15.000485 | orchestrator | Saturday 28 February 2026 00:25:12 +0000 (0:00:01.282) 0:00:09.717 *****
2026-02-28 00:25:15.000496 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:15.000506 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:15.000517 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:15.000533 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:15.000544 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:15.000555 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:15.000565 | orchestrator |
2026-02-28 00:25:15.000576 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-28 00:25:15.000593 | orchestrator | Saturday 28 February 2026 00:25:12 +0000 (0:00:00.189) 0:00:09.906 *****
2026-02-28 00:25:15.000604 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:15.000614 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:15.000625 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:15.000635 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:15.000646 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:15.000656 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:15.000667 | orchestrator |
2026-02-28 00:25:15.000678 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-28 00:25:15.000689 | orchestrator | Saturday 28 February 2026 00:25:13 +0000 (0:00:00.201) 0:00:10.108 *****
2026-02-28 00:25:15.000699 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:15.000710 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:15.000720 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:15.000731 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:15.000741 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:15.000751 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:15.000762 | orchestrator |
2026-02-28 00:25:15.000772 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-28 00:25:15.000783 | orchestrator | Saturday 28 February 2026 00:25:13 +0000 (0:00:00.614) 0:00:10.723 *****
2026-02-28 00:25:15.000810 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:15.000821 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:15.000845 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:15.000856 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:15.000867 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:15.000887 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:15.000898 | orchestrator |
2026-02-28 00:25:15.000909 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-28 00:25:15.000920 | orchestrator | Saturday 28 February 2026 00:25:13 +0000 (0:00:00.235) 0:00:10.958 *****
2026-02-28 00:25:15.000930 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-28 00:25:15.000941 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-28 00:25:15.000952 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:15.000962 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:25:15.000973 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:25:15.000983 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:15.000994 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:25:15.001004 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:15.001015 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:15.001026 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:15.001036 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-28 00:25:15.001047 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:15.001057 | orchestrator |
2026-02-28 00:25:15.001068 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-28 00:25:15.001079 | orchestrator | Saturday 28 February 2026 00:25:14 +0000 (0:00:00.714) 0:00:11.673 *****
2026-02-28 00:25:15.001090 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:15.001100 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:15.001111 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:15.001121 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:15.001132 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:15.001142 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:15.001153 | orchestrator |
2026-02-28 00:25:15.001164 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-28 00:25:15.001174 | orchestrator | Saturday 28 February 2026 00:25:14 +0000 (0:00:00.183) 0:00:11.857 *****
2026-02-28 00:25:15.001185 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:15.001195 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:15.001206 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:15.001217 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:15.001241 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:16.351450 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:16.351551 | orchestrator |
2026-02-28 00:25:16.351566 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-28 00:25:16.351578 | orchestrator | Saturday 28 February 2026 00:25:15 +0000 (0:00:00.147) 0:00:12.005 *****
2026-02-28 00:25:16.351589 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:16.351600 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:16.351610 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:16.351621 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:16.351631 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:16.351642 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:16.351652 | orchestrator |
2026-02-28 00:25:16.351663 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-28 00:25:16.351674 | orchestrator | Saturday 28 February 2026 00:25:15 +0000 (0:00:00.173) 0:00:12.178 *****
2026-02-28 00:25:16.351684 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:25:16.351694 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:25:16.351705 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:25:16.351715 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:25:16.351726 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:25:16.351736 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:25:16.351746 | orchestrator |
2026-02-28 00:25:16.351757 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-28 00:25:16.351767 | orchestrator | Saturday 28 February 2026 00:25:15 +0000 (0:00:00.660) 0:00:12.838 *****
2026-02-28 00:25:16.351778 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:25:16.351788 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:25:16.351892 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:25:16.351904 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:25:16.351914 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:25:16.351925 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:25:16.351935 | orchestrator |
2026-02-28 00:25:16.351946 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:25:16.351958 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:16.351992 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:16.352006 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:16.352020 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:16.352032 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:16.352043 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 00:25:16.352055 | orchestrator |
2026-02-28 00:25:16.352068 | orchestrator |
2026-02-28 00:25:16.352080 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:25:16.352093 | orchestrator | Saturday 28 February 2026 00:25:16 +0000 (0:00:00.250) 0:00:13.089 *****
2026-02-28 00:25:16.352105 | orchestrator | ===============================================================================
2026-02-28 00:25:16.352117 | orchestrator | Gathering Facts --------------------------------------------------------- 3.28s
2026-02-28 00:25:16.352129 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s
2026-02-28 00:25:16.352141 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s
2026-02-28 00:25:16.352175 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2026-02-28 00:25:16.352185 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s
2026-02-28 00:25:16.352196 | orchestrator | Do not require tty for all users ---------------------------------------- 0.81s
2026-02-28 00:25:16.352206 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2026-02-28 00:25:16.352216 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2026-02-28 00:25:16.352227 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s
2026-02-28 00:25:16.352237 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2026-02-28 00:25:16.352248 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-02-28 00:25:16.352258 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.24s
2026-02-28 00:25:16.352269 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s
2026-02-28 00:25:16.352279 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s
2026-02-28 00:25:16.352290 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2026-02-28 00:25:16.352301 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-02-28 00:25:16.352311 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2026-02-28 00:25:16.352322 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-02-28 00:25:16.352332 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2026-02-28 00:25:16.675044 | orchestrator | + osism apply --environment custom facts
2026-02-28 00:25:18.711571 | orchestrator | 2026-02-28 00:25:18 | INFO  | Trying to run play facts in environment custom
2026-02-28 00:25:28.741130 | orchestrator | 2026-02-28 00:25:28 | INFO  | Prepare task for execution of facts.
2026-02-28 00:25:28.813310 | orchestrator | 2026-02-28 00:25:28 | INFO  | Task b4c4f7c6-25b7-452a-85b8-55f95a9cf484 (facts) was prepared for execution.
2026-02-28 00:25:28.813417 | orchestrator | 2026-02-28 00:25:28 | INFO  | It takes a moment until task b4c4f7c6-25b7-452a-85b8-55f95a9cf484 (facts) has been started and output is visible here.
2026-02-28 00:26:11.420314 | orchestrator |
2026-02-28 00:26:11.420432 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-28 00:26:11.420449 | orchestrator |
2026-02-28 00:26:11.420461 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-28 00:26:11.420473 | orchestrator | Saturday 28 February 2026 00:25:32 +0000 (0:00:00.069) 0:00:00.069 *****
2026-02-28 00:26:11.420484 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:11.420496 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:11.420508 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:26:11.420519 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:11.420529 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:11.420540 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:26:11.420550 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:26:11.420561 | orchestrator |
2026-02-28 00:26:11.420572 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-28 00:26:11.420583 | orchestrator | Saturday 28 February 2026 00:25:34 +0000 (0:00:01.346) 0:00:01.415 *****
2026-02-28 00:26:11.420594 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:11.420605 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:26:11.420615 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:11.420626 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:11.420638 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:11.420665 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:26:11.420676 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:26:11.420687 | orchestrator |
2026-02-28 00:26:11.420720 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-28 00:26:11.420732 | orchestrator |
2026-02-28 00:26:11.420742 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-28 00:26:11.420754 | orchestrator | Saturday 28 February 2026 00:25:35 +0000 (0:00:01.180) 0:00:02.596 *****
2026-02-28 00:26:11.420764 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:11.420775 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:11.420786 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:11.420797 | orchestrator |
2026-02-28 00:26:11.420808 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-28 00:26:11.420820 | orchestrator | Saturday 28 February 2026 00:25:35 +0000 (0:00:00.104) 0:00:02.700 *****
2026-02-28 00:26:11.420861 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:11.420876 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:11.420888 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:11.420901 | orchestrator |
2026-02-28 00:26:11.420913 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-28 00:26:11.420926 | orchestrator | Saturday 28 February 2026 00:25:35 +0000 (0:00:00.204) 0:00:02.905 *****
2026-02-28 00:26:11.420939 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:11.420950 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:11.420961 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:11.420972 | orchestrator |
2026-02-28 00:26:11.420983 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-28 00:26:11.420993 | orchestrator | Saturday 28 February 2026 00:25:35 +0000 (0:00:00.226) 0:00:03.131 *****
2026-02-28 00:26:11.421006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:26:11.421018 | orchestrator |
2026-02-28 00:26:11.421029 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-28 00:26:11.421040 | orchestrator | Saturday 28 February 2026 00:25:36 +0000 (0:00:00.141) 0:00:03.273 *****
2026-02-28 00:26:11.421050 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:11.421061 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:11.421072 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:11.421082 | orchestrator |
2026-02-28 00:26:11.421093 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-28 00:26:11.421104 | orchestrator | Saturday 28 February 2026 00:25:36 +0000 (0:00:00.483) 0:00:03.756 *****
2026-02-28 00:26:11.421115 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:26:11.421126 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:26:11.421136 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:26:11.421147 | orchestrator |
2026-02-28 00:26:11.421158 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-28 00:26:11.421169 | orchestrator | Saturday 28 February 2026 00:25:36 +0000 (0:00:00.156) 0:00:03.913 *****
2026-02-28 00:26:11.421180 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:11.421190 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:11.421201 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:11.421212 | orchestrator |
2026-02-28 00:26:11.421223 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-28 00:26:11.421234 | orchestrator | Saturday 28 February 2026 00:25:37 +0000 (0:00:01.088) 0:00:05.001 *****
2026-02-28 00:26:11.421244 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:11.421255 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:11.421266 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:11.421277 | orchestrator |
2026-02-28 00:26:11.421288 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-28 00:26:11.421299 | orchestrator | Saturday 28 February 2026 00:25:38 +0000 (0:00:00.454) 0:00:05.455 *****
2026-02-28 00:26:11.421309 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:11.421320 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:11.421331 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:11.421342 | orchestrator |
2026-02-28 00:26:11.421360 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-28 00:26:11.421371 | orchestrator | Saturday 28 February 2026 00:25:39 +0000 (0:00:01.071) 0:00:06.527 *****
2026-02-28 00:26:11.421382 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:11.421392 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:11.421403 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:11.421414 | orchestrator |
2026-02-28 00:26:11.421424 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-28 00:26:11.421435 | orchestrator | Saturday 28 February 2026 00:25:54 +0000 (0:00:15.298) 0:00:21.826 *****
2026-02-28 00:26:11.421446 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:26:11.421456 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:26:11.421467 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:26:11.421478 | orchestrator |
2026-02-28 00:26:11.421488 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-28 00:26:11.421517 | orchestrator | Saturday 28 February 2026 00:25:54 +0000 (0:00:00.116) 0:00:21.943 *****
2026-02-28 00:26:11.421529 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:26:11.421540 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:26:11.421550 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:26:11.421561 | orchestrator |
2026-02-28 00:26:11.421571 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-28 00:26:11.421582 | orchestrator | Saturday 28 February 2026 00:26:02 +0000 (0:00:07.714) 0:00:29.657 *****
2026-02-28 00:26:11.421593 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:11.421604 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:11.421615 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:11.421626 | orchestrator |
2026-02-28 00:26:11.421636 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-28 00:26:11.421647 | orchestrator | Saturday 28 February 2026 00:26:02 +0000 (0:00:00.458) 0:00:30.116 *****
2026-02-28 00:26:11.421658 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-28 00:26:11.421669 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-28 00:26:11.421680 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-28 00:26:11.421691 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-28 00:26:11.421702 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-28 00:26:11.421713 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-28 00:26:11.421724 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-28 00:26:11.421734 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-28 00:26:11.421745 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-28 00:26:11.421756 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-28 00:26:11.421767 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-28 00:26:11.421777 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-28 00:26:11.421788 | orchestrator |
2026-02-28 00:26:11.421799 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-28 00:26:11.421810 | orchestrator | Saturday 28 February 2026 00:26:06 +0000 (0:00:03.527) 0:00:33.644 *****
2026-02-28 00:26:11.421820 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:11.421866 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:11.421884 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:11.421895 | orchestrator |
2026-02-28 00:26:11.421906 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-28 00:26:11.421917 | orchestrator |
2026-02-28 00:26:11.421928 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-28 00:26:11.421939 | orchestrator | Saturday 28 February 2026 00:26:07 +0000 (0:00:01.325) 0:00:34.969 *****
2026-02-28 00:26:11.421950 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:26:11.421969 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:26:11.421980 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:26:11.421991 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:11.422002 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:11.422114 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:11.422131 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:11.422142 | orchestrator |
2026-02-28 00:26:11.422154 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:26:11.422165 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:11.422177 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:11.422189 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:11.422200 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:26:11.422211 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:26:11.422222 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:26:11.422232 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:26:11.422243 | orchestrator |
2026-02-28 00:26:11.422254 | orchestrator |
2026-02-28 00:26:11.422265 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:26:11.422276 | orchestrator | Saturday 28 February 2026 00:26:11 +0000 (0:00:03.603) 0:00:38.572 *****
2026-02-28 00:26:11.422287 | orchestrator | ===============================================================================
2026-02-28 00:26:11.422298 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.30s
2026-02-28 00:26:11.422308 | orchestrator | Install required packages (Debian) -------------------------------------- 7.71s
2026-02-28 00:26:11.422319 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.60s
2026-02-28 00:26:11.422330 | orchestrator | Copy fact files --------------------------------------------------------- 3.53s
2026-02-28 00:26:11.422340 | orchestrator | Create custom facts directory ------------------------------------------- 1.35s
2026-02-28 00:26:11.422351 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.33s
2026-02-28 00:26:11.422371 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s
2026-02-28 00:26:11.621611 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.09s
2026-02-28 00:26:11.621711 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2026-02-28 00:26:11.621726 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.48s
2026-02-28 00:26:11.621737 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-02-28 00:26:11.621748 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2026-02-28 00:26:11.621759 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2026-02-28 00:26:11.621769 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-02-28 00:26:11.621780 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.16s
2026-02-28 00:26:11.621791 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-02-28 00:26:11.621823 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2026-02-28 00:26:11.621882 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-02-28 00:26:11.910947 | orchestrator | + osism apply bootstrap
2026-02-28 00:26:23.898000 | orchestrator | 2026-02-28 00:26:23 | INFO  | Prepare task for execution of bootstrap.
2026-02-28 00:26:23.969440 | orchestrator | 2026-02-28 00:26:23 | INFO  | Task cf271c5c-be5c-4180-a0ee-4668bc1efc28 (bootstrap) was prepared for execution.
2026-02-28 00:26:23.969536 | orchestrator | 2026-02-28 00:26:23 | INFO  | It takes a moment until task cf271c5c-be5c-4180-a0ee-4668bc1efc28 (bootstrap) has been started and output is visible here.
2026-02-28 00:26:41.176803 | orchestrator |
2026-02-28 00:26:41.176964 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-28 00:26:41.176985 | orchestrator |
2026-02-28 00:26:41.176997 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-28 00:26:41.177009 | orchestrator | Saturday 28 February 2026 00:26:28 +0000 (0:00:00.105) 0:00:00.105 *****
2026-02-28 00:26:41.177020 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:41.177033 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:41.177043 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:41.177054 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:41.177072 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:26:41.177086 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:26:41.177096 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:26:41.177107 | orchestrator |
2026-02-28 00:26:41.177118 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-28 00:26:41.177129 | orchestrator |
2026-02-28 00:26:41.177140 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-28 00:26:41.177151 | orchestrator | Saturday 28 February 2026 00:26:28 +0000 (0:00:00.178) 0:00:00.283 *****
2026-02-28 00:26:41.177161 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:26:41.177173 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:26:41.177192 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:26:41.177210 | orchestrator | ok: [testbed-manager]
2026-02-28 00:26:41.177229 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:26:41.177246 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:26:41.177262 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:26:41.177279 | orchestrator |
2026-02-28 00:26:41.177296 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-28 00:26:41.177315 | orchestrator |
2026-02-28 00:26:41.177334 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-28 00:26:41.177352 | orchestrator | Saturday 28 February 2026 00:26:32 +0000 (0:00:04.434) 0:00:04.718 *****
2026-02-28 00:26:41.177366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:26:41.177380 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-28 00:26:41.177392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:26:41.177405 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-28 00:26:41.177417 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-28 00:26:41.177429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:26:41.177442 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-28 00:26:41.177454 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-28 00:26:41.177487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-28 00:26:41.177499 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-28 00:26:41.177512 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-28 00:26:41.177525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 00:26:41.177537 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-28 00:26:41.177549 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-28 00:26:41.177562 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-28 00:26:41.177595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-28 00:26:41.177635 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-02-28 00:26:41.177646 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-28 00:26:41.177657 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-28 00:26:41.177668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-28 00:26:41.177690 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:41.177701 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:41.177712 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-28 00:26:41.177722 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-28 00:26:41.177733 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-28 00:26:41.177744 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-28 00:26:41.177754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-28 00:26:41.177765 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-28 00:26:41.177776 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-28 00:26:41.177786 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-28 00:26:41.177796 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-28 00:26:41.177807 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:26:41.177818 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-28 00:26:41.177828 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-28 00:26:41.177839 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-28 00:26:41.177882 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-28 00:26:41.177894 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-28 00:26:41.177905 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-28 00:26:41.177922 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-28 00:26:41.177937 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-28 00:26:41.177947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-28 00:26:41.177959 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-28 00:26:41.177970 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:41.177981 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-28 00:26:41.177991 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-28 00:26:41.178002 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-28 00:26:41.178100 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-28 00:26:41.178130 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:41.178152 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-28 00:26:41.178163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-28 00:26:41.178174 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:41.178185 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-28 00:26:41.178195 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-28 00:26:41.178206 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-28 00:26:41.178216 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-28 00:26:41.178227 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:41.178238 | orchestrator | 2026-02-28 00:26:41.178248 | orchestrator | 
PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-28 00:26:41.178259 | orchestrator | 2026-02-28 00:26:41.178270 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-28 00:26:41.178281 | orchestrator | Saturday 28 February 2026 00:26:33 +0000 (0:00:00.525) 0:00:05.243 ***** 2026-02-28 00:26:41.178291 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:41.178302 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:41.178323 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:41.178334 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:41.178344 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:41.178365 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:41.178376 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:41.178387 | orchestrator | 2026-02-28 00:26:41.178398 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-28 00:26:41.178408 | orchestrator | Saturday 28 February 2026 00:26:34 +0000 (0:00:01.218) 0:00:06.462 ***** 2026-02-28 00:26:41.178419 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:41.178430 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:41.178441 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:41.178451 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:41.178462 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:41.178472 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:41.178483 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:41.178494 | orchestrator | 2026-02-28 00:26:41.178504 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-28 00:26:41.178515 | orchestrator | Saturday 28 February 2026 00:26:35 +0000 (0:00:01.409) 0:00:07.871 ***** 2026-02-28 00:26:41.178526 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:41.178540 | orchestrator | 2026-02-28 00:26:41.178551 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-28 00:26:41.178562 | orchestrator | Saturday 28 February 2026 00:26:36 +0000 (0:00:00.295) 0:00:08.167 ***** 2026-02-28 00:26:41.178573 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:41.178583 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:41.178594 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:41.178604 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:41.178615 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:41.178625 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:41.178636 | orchestrator | changed: [testbed-manager] 2026-02-28 00:26:41.178647 | orchestrator | 2026-02-28 00:26:41.178657 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-28 00:26:41.178668 | orchestrator | Saturday 28 February 2026 00:26:38 +0000 (0:00:02.281) 0:00:10.449 ***** 2026-02-28 00:26:41.178679 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:41.178691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-4, testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:41.178703 | orchestrator | 2026-02-28 00:26:41.178714 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-28 00:26:41.178725 | orchestrator | Saturday 28 February 2026 00:26:38 +0000 (0:00:00.342) 0:00:10.792 ***** 2026-02-28 00:26:41.178752 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:41.178763 | 
orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:41.178774 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:41.178784 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:41.178795 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:41.178822 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:41.178834 | orchestrator | 2026-02-28 00:26:41.178844 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-02-28 00:26:41.178873 | orchestrator | Saturday 28 February 2026 00:26:39 +0000 (0:00:01.105) 0:00:11.898 ***** 2026-02-28 00:26:41.178884 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:41.178895 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:41.178906 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:41.178916 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:41.178927 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:41.178937 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:41.178955 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:41.178966 | orchestrator | 2026-02-28 00:26:41.178976 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-28 00:26:41.178993 | orchestrator | Saturday 28 February 2026 00:26:40 +0000 (0:00:00.548) 0:00:12.446 ***** 2026-02-28 00:26:41.179003 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:41.179014 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:41.179025 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:26:41.179035 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:41.179045 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:41.179056 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:41.179067 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:41.179078 | orchestrator | 2026-02-28 00:26:41.179089 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-02-28 00:26:41.179100 | orchestrator | Saturday 28 February 2026 00:26:41 +0000 (0:00:00.510) 0:00:12.957 ***** 2026-02-28 00:26:41.179111 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:41.179122 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:41.179141 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:26:53.276122 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:53.276232 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:53.276247 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:53.276260 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:53.276271 | orchestrator | 2026-02-28 00:26:53.276283 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-28 00:26:53.276296 | orchestrator | Saturday 28 February 2026 00:26:41 +0000 (0:00:00.240) 0:00:13.197 ***** 2026-02-28 00:26:53.276310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:53.276339 | orchestrator | 2026-02-28 00:26:53.276351 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-28 00:26:53.276362 | orchestrator | Saturday 28 February 2026 00:26:41 +0000 (0:00:00.291) 0:00:13.489 ***** 2026-02-28 00:26:53.276374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:53.276385 | orchestrator | 2026-02-28 00:26:53.276396 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-28 
00:26:53.276407 | orchestrator | Saturday 28 February 2026 00:26:42 +0000 (0:00:00.447) 0:00:13.936 ***** 2026-02-28 00:26:53.276417 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:53.276429 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.276440 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:53.276451 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:53.276462 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:53.276472 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:53.276483 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:53.276494 | orchestrator | 2026-02-28 00:26:53.276505 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-28 00:26:53.276516 | orchestrator | Saturday 28 February 2026 00:26:43 +0000 (0:00:01.267) 0:00:15.203 ***** 2026-02-28 00:26:53.276527 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:53.276538 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:53.276549 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:26:53.276560 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:53.276571 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:53.276582 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:53.276592 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:53.276603 | orchestrator | 2026-02-28 00:26:53.276614 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-28 00:26:53.276651 | orchestrator | Saturday 28 February 2026 00:26:43 +0000 (0:00:00.216) 0:00:15.420 ***** 2026-02-28 00:26:53.276665 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:53.276677 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:53.276691 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.276703 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:53.276715 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:53.276727 | orchestrator 
| ok: [testbed-node-1] 2026-02-28 00:26:53.276739 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:53.276752 | orchestrator | 2026-02-28 00:26:53.276765 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-28 00:26:53.276777 | orchestrator | Saturday 28 February 2026 00:26:44 +0000 (0:00:00.549) 0:00:15.969 ***** 2026-02-28 00:26:53.276789 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:53.276802 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:53.276815 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:26:53.276835 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:53.276856 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:53.276903 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:53.276926 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:53.276948 | orchestrator | 2026-02-28 00:26:53.276969 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-28 00:26:53.276991 | orchestrator | Saturday 28 February 2026 00:26:44 +0000 (0:00:00.265) 0:00:16.235 ***** 2026-02-28 00:26:53.277004 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:53.277015 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:53.277025 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.277036 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:53.277047 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:53.277057 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:53.277068 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:53.277079 | orchestrator | 2026-02-28 00:26:53.277090 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-28 00:26:53.277100 | orchestrator | Saturday 28 February 2026 00:26:44 +0000 (0:00:00.554) 0:00:16.790 ***** 2026-02-28 00:26:53.277111 | orchestrator | ok: 
[testbed-manager] 2026-02-28 00:26:53.277122 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:53.277132 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:53.277142 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:53.277153 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:53.277163 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:53.277174 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:53.277184 | orchestrator | 2026-02-28 00:26:53.277204 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-28 00:26:53.277215 | orchestrator | Saturday 28 February 2026 00:26:46 +0000 (0:00:01.167) 0:00:17.958 ***** 2026-02-28 00:26:53.277226 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:53.277237 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:53.277247 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:53.277258 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:53.277269 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:53.277280 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.277290 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:53.277301 | orchestrator | 2026-02-28 00:26:53.277311 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-28 00:26:53.277322 | orchestrator | Saturday 28 February 2026 00:26:47 +0000 (0:00:01.026) 0:00:18.984 ***** 2026-02-28 00:26:53.277352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:53.277364 | orchestrator | 2026-02-28 00:26:53.277375 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-28 00:26:53.277386 | orchestrator | Saturday 28 February 2026 
00:26:47 +0000 (0:00:00.342) 0:00:19.327 ***** 2026-02-28 00:26:53.277406 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:53.277417 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:53.277427 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:26:53.277438 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:26:53.277448 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:26:53.277459 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:53.277469 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:53.277479 | orchestrator | 2026-02-28 00:26:53.277490 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-28 00:26:53.277501 | orchestrator | Saturday 28 February 2026 00:26:48 +0000 (0:00:01.309) 0:00:20.636 ***** 2026-02-28 00:26:53.277511 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:53.277522 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:53.277532 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:53.277542 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.277553 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:53.277563 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:53.277574 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:53.277584 | orchestrator | 2026-02-28 00:26:53.277595 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-28 00:26:53.277605 | orchestrator | Saturday 28 February 2026 00:26:48 +0000 (0:00:00.242) 0:00:20.878 ***** 2026-02-28 00:26:53.277616 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:53.277626 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:53.277637 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:53.277647 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.277658 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:53.277668 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:53.277678 | 
orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:53.277689 | orchestrator | 2026-02-28 00:26:53.277699 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-28 00:26:53.277710 | orchestrator | Saturday 28 February 2026 00:26:49 +0000 (0:00:00.253) 0:00:21.132 ***** 2026-02-28 00:26:53.277721 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:53.277731 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:53.277741 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:53.277752 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.277762 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:53.277772 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:53.277783 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:53.277793 | orchestrator | 2026-02-28 00:26:53.277804 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-28 00:26:53.277814 | orchestrator | Saturday 28 February 2026 00:26:49 +0000 (0:00:00.215) 0:00:21.347 ***** 2026-02-28 00:26:53.277826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:26:53.277838 | orchestrator | 2026-02-28 00:26:53.277849 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-28 00:26:53.277859 | orchestrator | Saturday 28 February 2026 00:26:49 +0000 (0:00:00.288) 0:00:21.635 ***** 2026-02-28 00:26:53.277912 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:53.277931 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:53.277949 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:53.277969 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.277988 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:53.278006 | orchestrator | ok: 
[testbed-node-0] 2026-02-28 00:26:53.278092 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:53.278105 | orchestrator | 2026-02-28 00:26:53.278116 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-28 00:26:53.278126 | orchestrator | Saturday 28 February 2026 00:26:50 +0000 (0:00:00.557) 0:00:22.193 ***** 2026-02-28 00:26:53.278137 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:26:53.278148 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:26:53.278165 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:26:53.278176 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:26:53.278187 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:26:53.278197 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:26:53.278207 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:26:53.278218 | orchestrator | 2026-02-28 00:26:53.278229 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-28 00:26:53.278239 | orchestrator | Saturday 28 February 2026 00:26:50 +0000 (0:00:00.250) 0:00:22.443 ***** 2026-02-28 00:26:53.278250 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:53.278260 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:53.278271 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:53.278281 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.278292 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:26:53.278302 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:26:53.278312 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:26:53.278323 | orchestrator | 2026-02-28 00:26:53.278333 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-28 00:26:53.278344 | orchestrator | Saturday 28 February 2026 00:26:51 +0000 (0:00:01.094) 0:00:23.538 ***** 2026-02-28 00:26:53.278355 | orchestrator | ok: [testbed-node-3] 2026-02-28 
00:26:53.278365 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:53.278376 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:53.278386 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.278397 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:26:53.278407 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:26:53.278417 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:26:53.278428 | orchestrator | 2026-02-28 00:26:53.278439 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-28 00:26:53.278449 | orchestrator | Saturday 28 February 2026 00:26:52 +0000 (0:00:00.549) 0:00:24.088 ***** 2026-02-28 00:26:53.278460 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:26:53.278470 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:26:53.278481 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:26:53.278491 | orchestrator | ok: [testbed-manager] 2026-02-28 00:26:53.278512 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:27:35.316749 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:27:35.316863 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:35.316880 | orchestrator | 2026-02-28 00:27:35.316893 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-28 00:27:35.316974 | orchestrator | Saturday 28 February 2026 00:26:53 +0000 (0:00:01.165) 0:00:25.253 ***** 2026-02-28 00:27:35.316994 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:35.317015 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:35.317033 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:35.317053 | orchestrator | changed: [testbed-manager] 2026-02-28 00:27:35.317065 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:27:35.317076 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:27:35.317086 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:27:35.317097 | orchestrator | 2026-02-28 00:27:35.317108 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-02-28 00:27:35.317120 | orchestrator | Saturday 28 February 2026 00:27:09 +0000 (0:00:16.023) 0:00:41.277 ***** 2026-02-28 00:27:35.317131 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:35.317142 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:35.317153 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:35.317164 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:35.317175 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:35.317186 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:35.317196 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:35.317207 | orchestrator | 2026-02-28 00:27:35.317218 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-02-28 00:27:35.317228 | orchestrator | Saturday 28 February 2026 00:27:09 +0000 (0:00:00.220) 0:00:41.498 ***** 2026-02-28 00:27:35.317239 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:35.317276 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:35.317290 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:35.317302 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:35.317314 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:35.317326 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:35.317337 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:35.317349 | orchestrator | 2026-02-28 00:27:35.317362 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-02-28 00:27:35.317374 | orchestrator | Saturday 28 February 2026 00:27:09 +0000 (0:00:00.253) 0:00:41.752 ***** 2026-02-28 00:27:35.317386 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:35.317398 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:35.317409 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:35.317422 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:35.317433 | orchestrator | ok: 
[testbed-node-0] 2026-02-28 00:27:35.317445 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:35.317457 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:35.317469 | orchestrator | 2026-02-28 00:27:35.317481 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-02-28 00:27:35.317494 | orchestrator | Saturday 28 February 2026 00:27:10 +0000 (0:00:00.236) 0:00:41.988 ***** 2026-02-28 00:27:35.317507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:27:35.317522 | orchestrator | 2026-02-28 00:27:35.317534 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-28 00:27:35.317547 | orchestrator | Saturday 28 February 2026 00:27:10 +0000 (0:00:00.293) 0:00:42.281 ***** 2026-02-28 00:27:35.317559 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:27:35.317572 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:27:35.317583 | orchestrator | ok: [testbed-manager] 2026-02-28 00:27:35.317595 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:27:35.317625 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:27:35.317638 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:27:35.317648 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:27:35.317659 | orchestrator | 2026-02-28 00:27:35.317669 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-28 00:27:35.317680 | orchestrator | Saturday 28 February 2026 00:27:12 +0000 (0:00:01.673) 0:00:43.955 ***** 2026-02-28 00:27:35.317691 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:27:35.317701 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:27:35.317712 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:27:35.317722 | orchestrator | 
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [osism.services.rsyslog : Manage rsyslog service] *************************
Saturday 28 February 2026 00:27:13 +0000 (0:00:01.116) 0:00:45.072 *****
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [osism.services.rsyslog : Include fluentd tasks] **************************
Saturday 28 February 2026 00:27:14 +0000 (0:00:00.313) 0:00:45.973 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
Saturday 28 February 2026 00:27:14 +0000 (0:00:00.313) 0:00:46.286 *****
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [osism.services.rsyslog : Include additional log server tasks] ************
Saturday 28 February 2026 00:27:15 +0000 (0:00:01.028) 0:00:47.315 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.rsyslog : Include logrotate tasks] ************************
Saturday 28 February 2026 00:27:15 +0000 (0:00:00.292) 0:00:47.608 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
Saturday 28 February 2026 00:27:16 +0000 (0:00:00.370) 0:00:47.978 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-0]

TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
Saturday 28 February 2026 00:27:17 +0000 (0:00:01.873) 0:00:49.852 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Saturday 28 February 2026 00:27:19 +0000 (0:00:01.230) 0:00:51.082 *****
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-manager]

TASK [osism.commons.systohc : Sync hardware clock] *****************************
Saturday 28 February 2026 00:27:32 +0000 (0:00:12.994) 0:01:04.077 *****
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Saturday 28 February 2026 00:27:33 +0000 (0:00:01.493) 0:01:05.570 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Saturday 28 February 2026 00:27:34 +0000 (0:00:00.887) 0:01:06.457 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Saturday 28 February 2026 00:27:34 +0000 (0:00:00.220) 0:01:06.678 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Saturday 28 February 2026 00:27:34 +0000 (0:00:00.217) 0:01:06.895 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.packages : Install needrestart package] ********************
Saturday 28 February 2026 00:27:35 +0000 (0:00:00.319) 0:01:07.215 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Saturday 28 February 2026 00:27:36 +0000 (0:00:01.581) 0:01:08.797 *****
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-manager]
changed: [testbed-node-1]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Saturday 28 February 2026 00:27:37 +0000 (0:00:00.542) 0:01:09.339 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Update package cache] ***************************
Saturday 28 February 2026 00:27:37 +0000 (0:00:00.245) 0:01:09.584 *****
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-2]

TASK [osism.commons.packages : Download upgrade packages] **********************
Saturday 28 February 2026 00:27:38 +0000 (0:00:01.176) 0:01:10.760 *****
changed: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [osism.commons.packages : Upgrade packages] *******************************
Saturday 28 February 2026 00:27:40 +0000 (0:00:01.646) 0:01:12.407 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [osism.commons.packages : Download required packages] *********************
Saturday 28 February 2026 00:27:42 +0000 (0:00:02.337) 0:01:14.744 *****
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-4]

TASK [osism.commons.packages : Install required packages] **********************
Saturday 28 February 2026 00:28:17 +0000 (0:00:34.423) 0:01:49.168 *****
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-0]

TASK [osism.commons.packages : Remove useless packages from the cache] *********
Saturday 28 February 2026 00:29:44 +0000 (0:01:27.271) 0:03:16.440 *****
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
Saturday 28 February 2026 00:29:46 +0000 (0:00:01.755) 0:03:18.195 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-1]
changed: [testbed-manager]

TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
Saturday 28 February 2026 00:29:59 +0000 (0:00:12.815) 0:03:31.010 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})

TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
Saturday 28 February 2026 00:29:59 +0000 (0:00:00.396) 0:03:31.407 *****
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-manager]
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
Saturday 28 February 2026 00:30:00 +0000 (0:00:00.750) 0:03:32.158 *****
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-4]
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5]
skipping: [testbed-manager]
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})

TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
Saturday 28 February 2026 00:30:04 +0000 (0:00:04.695) 0:03:36.854 *****
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})

TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
Saturday 28 February 2026 00:30:05 +0000 (0:00:00.569) 0:03:37.424 *****
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
Saturday 28 February 2026 00:30:05 +0000 (0:00:00.482) 0:03:37.906 *****
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
Saturday 28 February 2026 00:30:06 +0000 (0:00:00.606) 0:03:38.513 *****
skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})

TASK [osism.commons.limits : Include limits tasks] *****************************
Saturday 28 February 2026 00:30:07 +0000 (0:00:00.519) 0:03:39.032 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.services : Populate service facts] *************************
Saturday 28 February 2026 00:30:07 +0000 (0:00:00.312) 0:03:39.344 *****
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]

TASK [osism.commons.services : Check services] *********************************
Saturday 28 February 2026 00:30:13 +0000 (0:00:05.790) 0:03:45.134 *****
skipping: [testbed-node-3] => (item=nscd)
skipping: [testbed-node-4] => (item=nscd)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=nscd)
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item=nscd)
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item=nscd)
skipping: [testbed-node-1] => (item=nscd)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nscd)
| orchestrator | skipping: [testbed-node-2] 2026-02-28 00:30:18.980898 | orchestrator | 2026-02-28 00:30:18.980913 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-28 00:30:18.980930 | orchestrator | Saturday 28 February 2026 00:30:13 +0000 (0:00:00.436) 0:03:45.571 ***** 2026-02-28 00:30:18.980945 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-28 00:30:18.980961 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-28 00:30:18.980977 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-28 00:30:18.981044 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-28 00:30:18.981065 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-28 00:30:18.981081 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-28 00:30:18.981113 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-28 00:30:18.981129 | orchestrator | 2026-02-28 00:30:18.981142 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-28 00:30:18.981151 | orchestrator | Saturday 28 February 2026 00:30:14 +0000 (0:00:00.988) 0:03:46.559 ***** 2026-02-28 00:30:18.981163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:30:18.981176 | orchestrator | 2026-02-28 00:30:18.981186 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-28 00:30:18.981195 | orchestrator | Saturday 28 February 2026 00:30:15 +0000 (0:00:00.401) 0:03:46.960 ***** 2026-02-28 00:30:18.981205 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:18.981214 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:18.981224 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:18.981233 | orchestrator | ok: 
[testbed-manager] 2026-02-28 00:30:18.981243 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:18.981257 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:18.981273 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:30:18.981289 | orchestrator | 2026-02-28 00:30:18.981306 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-28 00:30:18.981323 | orchestrator | Saturday 28 February 2026 00:30:16 +0000 (0:00:01.376) 0:03:48.337 ***** 2026-02-28 00:30:18.981339 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:18.981350 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:18.981359 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:18.981368 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:18.981377 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:18.981386 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:18.981396 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:30:18.981405 | orchestrator | 2026-02-28 00:30:18.981415 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-28 00:30:18.981424 | orchestrator | Saturday 28 February 2026 00:30:17 +0000 (0:00:00.682) 0:03:49.019 ***** 2026-02-28 00:30:18.981434 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:30:18.981462 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:18.981472 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:30:18.981481 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:18.981491 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:18.981500 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:30:18.981509 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:30:18.981519 | orchestrator | 2026-02-28 00:30:18.981528 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-28 00:30:18.981538 | orchestrator | Saturday 28 February 2026 00:30:17 +0000 (0:00:00.660) 
0:03:49.680 ***** 2026-02-28 00:30:18.981547 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:18.981556 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:18.981566 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:18.981575 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:18.981584 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:30:18.981594 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:18.981603 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:18.981612 | orchestrator | 2026-02-28 00:30:18.981622 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-28 00:30:18.981631 | orchestrator | Saturday 28 February 2026 00:30:18 +0000 (0:00:00.651) 0:03:50.332 ***** 2026-02-28 00:30:18.981645 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237122.3505754, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:18.981671 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237136.0453043, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:18.981682 | orchestrator | changed: 
[testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237156.663666, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:18.981704 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237167.4719112, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329508 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237129.216, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329649 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 
567, 'dev': 2049, 'nlink': 1, 'atime': 1772237141.4480793, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329668 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772237172.479282, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329681 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329714 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329741 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329754 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329794 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329807 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329818 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 00:30:24.329830 | orchestrator | 2026-02-28 00:30:24.329843 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-28 00:30:24.329856 | orchestrator | Saturday 28 February 2026 00:30:19 +0000 (0:00:01.047) 0:03:51.379 ***** 2026-02-28 00:30:24.329866 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:30:24.329878 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:30:24.329889 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:24.329907 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:24.329918 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:24.329929 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:30:24.330136 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:30:24.330148 | orchestrator | 2026-02-28 00:30:24.330160 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-02-28 00:30:24.330170 | orchestrator | Saturday 28 February 2026 00:30:20 +0000 (0:00:01.149) 0:03:52.529 ***** 2026-02-28 00:30:24.330181 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:30:24.330191 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:24.330202 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:30:24.330212 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:24.330222 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:24.330233 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:30:24.330243 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:30:24.330254 | orchestrator | 2026-02-28 00:30:24.330265 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-28 00:30:24.330275 | orchestrator | Saturday 28 February 2026 00:30:21 +0000 (0:00:01.138) 0:03:53.667 ***** 2026-02-28 00:30:24.330286 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:30:24.330296 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:30:24.330307 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:30:24.330317 | orchestrator | changed: [testbed-manager] 2026-02-28 00:30:24.330327 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:30:24.330338 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:30:24.330348 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:30:24.330359 | orchestrator | 2026-02-28 00:30:24.330369 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-28 00:30:24.330387 | orchestrator | Saturday 28 February 2026 00:30:22 +0000 (0:00:01.121) 0:03:54.788 ***** 2026-02-28 00:30:24.330399 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:30:24.330410 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:30:24.330420 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:30:24.330431 | orchestrator | skipping: [testbed-manager] 
2026-02-28 00:30:24.330442 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:30:24.330452 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:30:24.330463 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:30:24.330473 | orchestrator | 2026-02-28 00:30:24.330484 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-28 00:30:24.330494 | orchestrator | Saturday 28 February 2026 00:30:23 +0000 (0:00:00.301) 0:03:55.090 ***** 2026-02-28 00:30:24.330505 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:30:24.330517 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:30:24.330527 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:30:24.330538 | orchestrator | ok: [testbed-manager] 2026-02-28 00:30:24.330548 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:30:24.330559 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:30:24.330569 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:30:24.330579 | orchestrator | 2026-02-28 00:30:24.330590 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-28 00:30:24.330601 | orchestrator | Saturday 28 February 2026 00:30:23 +0000 (0:00:00.704) 0:03:55.795 ***** 2026-02-28 00:30:24.330614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:30:24.330626 | orchestrator | 2026-02-28 00:30:24.330637 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-28 00:30:24.330659 | orchestrator | Saturday 28 February 2026 00:30:24 +0000 (0:00:00.434) 0:03:56.229 ***** 2026-02-28 00:31:42.095848 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:42.095999 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:31:42.096030 | orchestrator | changed: 
[testbed-node-5] 2026-02-28 00:31:42.096081 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:31:42.096133 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:31:42.096155 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:31:42.096175 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:31:42.096196 | orchestrator | 2026-02-28 00:31:42.096218 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-28 00:31:42.096239 | orchestrator | Saturday 28 February 2026 00:30:32 +0000 (0:00:08.177) 0:04:04.407 ***** 2026-02-28 00:31:42.096257 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:42.096278 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:42.096297 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:31:42.096317 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:42.096335 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:42.096356 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:42.096376 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:42.096395 | orchestrator | 2026-02-28 00:31:42.096414 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-28 00:31:42.096433 | orchestrator | Saturday 28 February 2026 00:30:33 +0000 (0:00:01.458) 0:04:05.865 ***** 2026-02-28 00:31:42.096453 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:42.096473 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:42.096492 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:31:42.096513 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:42.096531 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:42.096650 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:42.096671 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:42.096690 | orchestrator | 2026-02-28 00:31:42.096709 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-28 00:31:42.096727 | orchestrator | 
Saturday 28 February 2026 00:30:35 +0000 (0:00:01.063) 0:04:06.929 ***** 2026-02-28 00:31:42.096745 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:42.096763 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:42.096782 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:31:42.096801 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:42.096819 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:42.096838 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:42.096856 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:42.096875 | orchestrator | 2026-02-28 00:31:42.096894 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-28 00:31:42.096915 | orchestrator | Saturday 28 February 2026 00:30:35 +0000 (0:00:00.284) 0:04:07.214 ***** 2026-02-28 00:31:42.096934 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:42.096952 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:42.096970 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:31:42.096989 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:42.097007 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:42.097025 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:42.097086 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:42.097106 | orchestrator | 2026-02-28 00:31:42.097125 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-28 00:31:42.097144 | orchestrator | Saturday 28 February 2026 00:30:35 +0000 (0:00:00.302) 0:04:07.517 ***** 2026-02-28 00:31:42.097163 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:42.097206 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:42.097224 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:31:42.097243 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:42.097261 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:42.097280 | orchestrator | ok: [testbed-node-1] 2026-02-28 
00:31:42.097298 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:42.097316 | orchestrator | 2026-02-28 00:31:42.097335 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-28 00:31:42.097353 | orchestrator | Saturday 28 February 2026 00:30:35 +0000 (0:00:00.339) 0:04:07.857 ***** 2026-02-28 00:31:42.097372 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:31:42.097391 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:31:42.097409 | orchestrator | ok: [testbed-manager] 2026-02-28 00:31:42.097447 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:31:42.097464 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:31:42.097482 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:31:42.097500 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:31:42.097518 | orchestrator | 2026-02-28 00:31:42.097537 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-02-28 00:31:42.097555 | orchestrator | Saturday 28 February 2026 00:30:41 +0000 (0:00:05.625) 0:04:13.482 ***** 2026-02-28 00:31:42.097578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:31:42.097600 | orchestrator | 2026-02-28 00:31:42.097618 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-02-28 00:31:42.097636 | orchestrator | Saturday 28 February 2026 00:30:41 +0000 (0:00:00.376) 0:04:13.858 ***** 2026-02-28 00:31:42.097654 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-02-28 00:31:42.097671 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-02-28 00:31:42.097690 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:31:42.097709 | orchestrator | skipping: [testbed-node-4] => 
(item=apt-daily-upgrade)  2026-02-28 00:31:42.097727 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-02-28 00:31:42.097745 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-02-28 00:31:42.097763 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-02-28 00:31:42.097783 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:31:42.097802 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:31:42.097821 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-02-28 00:31:42.097840 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-02-28 00:31:42.097860 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-02-28 00:31:42.097879 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:31:42.097898 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-02-28 00:31:42.097917 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:31:42.097937 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-02-28 00:31:42.097986 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-02-28 00:31:42.098006 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:31:42.098186 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-02-28 00:31:42.098207 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-02-28 00:31:42.098224 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:31:42.098243 | orchestrator | 2026-02-28 00:31:42.098262 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-02-28 00:31:42.098280 | orchestrator | Saturday 28 February 2026 00:30:42 +0000 (0:00:00.316) 0:04:14.175 ***** 2026-02-28 00:31:42.098297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:31:42.098315 | orchestrator | 2026-02-28 00:31:42.098331 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-02-28 00:31:42.098347 | orchestrator | Saturday 28 February 2026 00:30:42 +0000 (0:00:00.394) 0:04:14.570 ***** 2026-02-28 00:31:42.098364 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-02-28 00:31:42.098382 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:31:42.098400 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-02-28 00:31:42.098419 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-02-28 00:31:42.098437 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:31:42.098457 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:31:42.098477 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-02-28 00:31:42.098517 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-02-28 00:31:42.098538 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:31:42.098581 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-02-28 00:31:42.098604 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:31:42.098624 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:31:42.098644 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-02-28 00:31:42.098662 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:31:42.098679 | orchestrator | 2026-02-28 00:31:42.098697 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-02-28 00:31:42.098714 | orchestrator | Saturday 28 February 2026 00:30:42 +0000 (0:00:00.300) 0:04:14.870 ***** 2026-02-28 00:31:42.098732 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:31:42.098750 | orchestrator |
2026-02-28 00:31:42.098767 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-28 00:31:42.098783 | orchestrator | Saturday 28 February 2026 00:30:43 +0000 (0:00:00.381) 0:04:15.251 *****
2026-02-28 00:31:42.098799 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:31:42.098814 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:31:42.098831 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:31:42.098847 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:31:42.098862 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:31:42.098878 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:31:42.098895 | orchestrator | changed: [testbed-manager]
2026-02-28 00:31:42.098912 | orchestrator |
2026-02-28 00:31:42.098927 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-28 00:31:42.098943 | orchestrator | Saturday 28 February 2026 00:31:18 +0000 (0:00:35.255) 0:04:50.506 *****
2026-02-28 00:31:42.098958 | orchestrator | changed: [testbed-manager]
2026-02-28 00:31:42.098974 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:31:42.098990 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:31:42.099005 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:31:42.099021 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:31:42.099036 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:31:42.099123 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:31:42.099142 | orchestrator |
2026-02-28 00:31:42.099160 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-28 00:31:42.099176 | orchestrator | Saturday 28 February 2026 00:31:26 +0000 (0:00:07.858) 0:04:58.365 *****
2026-02-28 00:31:42.099191 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:31:42.099207 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:31:42.099223 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:31:42.099238 | orchestrator | changed: [testbed-manager]
2026-02-28 00:31:42.099254 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:31:42.099269 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:31:42.099285 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:31:42.099302 | orchestrator |
2026-02-28 00:31:42.099319 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-28 00:31:42.099335 | orchestrator | Saturday 28 February 2026 00:31:34 +0000 (0:00:07.676) 0:05:06.041 *****
2026-02-28 00:31:42.099351 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:42.099366 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:42.099382 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:42.099399 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:42.099416 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:42.099433 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:42.099450 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:42.099467 | orchestrator |
2026-02-28 00:31:42.099484 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-28 00:31:42.099516 | orchestrator | Saturday 28 February 2026 00:31:35 +0000 (0:00:01.812) 0:05:07.854 *****
2026-02-28 00:31:42.099533 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:31:42.099549 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:31:42.099565 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:31:42.099581 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:31:42.099598 | orchestrator | changed: [testbed-manager]
2026-02-28 00:31:42.099615 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:31:42.099632 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:31:42.099649 | orchestrator |
2026-02-28 00:31:42.099685 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-28 00:31:52.943152 | orchestrator | Saturday 28 February 2026 00:31:42 +0000 (0:00:06.137) 0:05:13.991 *****
2026-02-28 00:31:52.943237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:31:52.943247 | orchestrator |
2026-02-28 00:31:52.943254 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-28 00:31:52.943261 | orchestrator | Saturday 28 February 2026 00:31:42 +0000 (0:00:00.427) 0:05:14.419 *****
2026-02-28 00:31:52.943268 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:31:52.943275 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:31:52.943280 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:31:52.943286 | orchestrator | changed: [testbed-manager]
2026-02-28 00:31:52.943292 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:31:52.943298 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:31:52.943304 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:31:52.943310 | orchestrator |
2026-02-28 00:31:52.943316 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-28 00:31:52.943322 | orchestrator | Saturday 28 February 2026 00:31:43 +0000 (0:00:00.749) 0:05:15.169 *****
2026-02-28 00:31:52.943328 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:52.943334 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:52.943340 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:52.943346 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:52.943352 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:52.943359 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:52.943368 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:52.943379 | orchestrator |
2026-02-28 00:31:52.943390 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-28 00:31:52.943400 | orchestrator | Saturday 28 February 2026 00:31:44 +0000 (0:00:01.621) 0:05:16.790 *****
2026-02-28 00:31:52.943409 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:31:52.943418 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:31:52.943427 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:31:52.943435 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:31:52.943444 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:31:52.943452 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:31:52.943462 | orchestrator | changed: [testbed-manager]
2026-02-28 00:31:52.943471 | orchestrator |
2026-02-28 00:31:52.943480 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-28 00:31:52.943491 | orchestrator | Saturday 28 February 2026 00:31:45 +0000 (0:00:00.767) 0:05:17.558 *****
2026-02-28 00:31:52.943501 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:31:52.943511 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:31:52.943518 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:31:52.943523 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:31:52.943528 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:31:52.943533 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:31:52.943538 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:31:52.943543 | orchestrator |
2026-02-28 00:31:52.943548 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-28 00:31:52.943554 | orchestrator | Saturday 28 February 2026 00:31:45 +0000 (0:00:00.311) 0:05:17.869 *****
2026-02-28 00:31:52.943572 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:31:52.943577 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:31:52.943582 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:31:52.943587 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:31:52.943592 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:31:52.943597 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:31:52.943602 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:31:52.943607 | orchestrator |
2026-02-28 00:31:52.943612 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-28 00:31:52.943617 | orchestrator | Saturday 28 February 2026 00:31:46 +0000 (0:00:00.303) 0:05:18.272 *****
2026-02-28 00:31:52.943622 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:52.943627 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:52.943632 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:52.943637 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:52.943642 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:52.943647 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:52.943658 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:52.943663 | orchestrator |
2026-02-28 00:31:52.943668 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-28 00:31:52.943673 | orchestrator | Saturday 28 February 2026 00:31:46 +0000 (0:00:00.303) 0:05:18.576 *****
2026-02-28 00:31:52.943678 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:31:52.943683 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:31:52.943688 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:31:52.943693 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:31:52.943698 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:31:52.943703 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:31:52.943708 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:31:52.943713 | orchestrator |
2026-02-28 00:31:52.943718 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-28 00:31:52.943724 | orchestrator | Saturday 28 February 2026 00:31:46 +0000 (0:00:00.294) 0:05:18.871 *****
2026-02-28 00:31:52.943729 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:52.943734 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:52.943739 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:52.943744 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:52.943749 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:52.943754 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:52.943759 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:52.943764 | orchestrator |
2026-02-28 00:31:52.943769 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-28 00:31:52.943774 | orchestrator | Saturday 28 February 2026 00:31:47 +0000 (0:00:00.302) 0:05:19.165 *****
2026-02-28 00:31:52.943779 | orchestrator | ok: [testbed-node-3] => 
2026-02-28 00:31:52.943784 | orchestrator |  docker_version: 5:27.5.1
2026-02-28 00:31:52.943789 | orchestrator | ok: [testbed-node-4] => 
2026-02-28 00:31:52.943794 | orchestrator |  docker_version: 5:27.5.1
2026-02-28 00:31:52.943799 | orchestrator | ok: [testbed-node-5] => 
2026-02-28 00:31:52.943804 | orchestrator |  docker_version: 5:27.5.1
2026-02-28 00:31:52.943809 | orchestrator | ok: [testbed-manager] => 
2026-02-28 00:31:52.943814 | orchestrator |  docker_version: 5:27.5.1
2026-02-28 00:31:52.943830 | orchestrator | ok: [testbed-node-0] => 
2026-02-28 00:31:52.943835 | orchestrator |  docker_version: 5:27.5.1
2026-02-28 00:31:52.943840 | orchestrator | ok: [testbed-node-1] => 
2026-02-28 00:31:52.943845 | orchestrator |  docker_version: 5:27.5.1
2026-02-28 00:31:52.943850 | orchestrator | ok: [testbed-node-2] => 
2026-02-28 00:31:52.943855 | orchestrator |  docker_version: 5:27.5.1
2026-02-28 00:31:52.943860 | orchestrator |
2026-02-28 00:31:52.943865 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-28 00:31:52.943870 | orchestrator | Saturday 28 February 2026 00:31:47 +0000 (0:00:00.303) 0:05:19.468 *****
2026-02-28 00:31:52.943875 | orchestrator | ok: [testbed-node-3] => 
2026-02-28 00:31:52.943884 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-28 00:31:52.943889 | orchestrator | ok: [testbed-node-4] => 
2026-02-28 00:31:52.943894 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-28 00:31:52.943899 | orchestrator | ok: [testbed-node-5] => 
2026-02-28 00:31:52.943904 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-28 00:31:52.943909 | orchestrator | ok: [testbed-manager] => 
2026-02-28 00:31:52.943914 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-28 00:31:52.943919 | orchestrator | ok: [testbed-node-0] => 
2026-02-28 00:31:52.943924 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-28 00:31:52.943929 | orchestrator | ok: [testbed-node-1] => 
2026-02-28 00:31:52.943934 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-28 00:31:52.943939 | orchestrator | ok: [testbed-node-2] => 
2026-02-28 00:31:52.943943 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-28 00:31:52.943948 | orchestrator |
2026-02-28 00:31:52.943953 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-28 00:31:52.943958 | orchestrator | Saturday 28 February 2026 00:31:47 +0000 (0:00:00.303) 0:05:19.771 *****
2026-02-28 00:31:52.943963 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:31:52.943968 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:31:52.943973 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:31:52.943978 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:31:52.943983 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:31:52.943988 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:31:52.943993 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:31:52.943998 | orchestrator |
2026-02-28 00:31:52.944003 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-28 00:31:52.944008 | orchestrator | Saturday 28 February 2026 00:31:48 +0000 (0:00:00.287) 0:05:20.058 *****
2026-02-28 00:31:52.944013 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:31:52.944018 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:31:52.944023 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:31:52.944028 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:31:52.944033 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:31:52.944037 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:31:52.944042 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:31:52.944076 | orchestrator |
2026-02-28 00:31:52.944081 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-28 00:31:52.944086 | orchestrator | Saturday 28 February 2026 00:31:48 +0000 (0:00:00.295) 0:05:20.353 *****
2026-02-28 00:31:52.944093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:31:52.944099 | orchestrator |
2026-02-28 00:31:52.944104 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-28 00:31:52.944110 | orchestrator | Saturday 28 February 2026 00:31:48 +0000 (0:00:00.511) 0:05:20.865 *****
2026-02-28 00:31:52.944115 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:52.944120 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:52.944125 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:52.944130 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:52.944135 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:52.944140 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:52.944144 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:52.944149 | orchestrator |
2026-02-28 00:31:52.944154 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-28 00:31:52.944160 | orchestrator | Saturday 28 February 2026 00:31:49 +0000 (0:00:00.806) 0:05:21.671 *****
2026-02-28 00:31:52.944168 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:31:52.944173 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:31:52.944178 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:31:52.944183 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:31:52.944188 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:31:52.944196 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:31:52.944201 | orchestrator | ok: [testbed-manager]
2026-02-28 00:31:52.944206 | orchestrator |
2026-02-28 00:31:52.944211 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-28 00:31:52.944217 | orchestrator | Saturday 28 February 2026 00:31:52 +0000 (0:00:02.803) 0:05:24.474 *****
2026-02-28 00:31:52.944222 | orchestrator | skipping: [testbed-node-3] => (item=containerd) 
2026-02-28 00:31:52.944228 | orchestrator | skipping: [testbed-node-3] => (item=docker.io) 
2026-02-28 00:31:52.944233 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine) 
2026-02-28 00:31:52.944238 | orchestrator | skipping: [testbed-node-4] => (item=containerd) 
2026-02-28 00:31:52.944243 | orchestrator | skipping: [testbed-node-4] => (item=docker.io) 
2026-02-28 00:31:52.944248 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:31:52.944253 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine) 
2026-02-28 00:31:52.944258 | orchestrator | skipping: [testbed-node-5] => (item=containerd) 
2026-02-28 00:31:52.944263 | orchestrator | skipping: [testbed-node-5] => (item=docker.io) 
2026-02-28 00:31:52.944268 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine) 
2026-02-28 00:31:52.944273 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:31:52.944278 | orchestrator | skipping: [testbed-manager] => (item=containerd) 
2026-02-28 00:31:52.944283 | orchestrator | skipping: [testbed-manager] => (item=docker.io) 
2026-02-28 00:31:52.944288 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:31:52.944293 | orchestrator | skipping: [testbed-manager] => (item=docker-engine) 
2026-02-28 00:31:52.944298 | orchestrator | skipping: [testbed-node-0] => (item=containerd) 
2026-02-28 00:31:52.944306 | orchestrator | skipping: [testbed-node-0] => (item=docker.io) 
2026-02-28 00:32:52.197305 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine) 
2026-02-28 00:32:52.197394 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:32:52.197406 | orchestrator | skipping: [testbed-node-1] => (item=containerd) 
2026-02-28 00:32:52.197413 | orchestrator | skipping: [testbed-node-1] => (item=docker.io) 
2026-02-28 00:32:52.197420 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine) 
2026-02-28 00:32:52.197426 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:32:52.197432 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:32:52.197439 | orchestrator | skipping: [testbed-node-2] => (item=containerd) 
2026-02-28 00:32:52.197445 | orchestrator | skipping: [testbed-node-2] => (item=docker.io) 
2026-02-28 00:32:52.197451 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine) 
2026-02-28 00:32:52.197457 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:32:52.197463 | orchestrator |
2026-02-28 00:32:52.197470 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-28 00:32:52.197478 | orchestrator | Saturday 28 February 2026 00:31:53 +0000 (0:00:00.595) 0:05:25.070 *****
2026-02-28 00:32:52.197488 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:52.197499 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.197508 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.197518 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.197528 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.197537 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.197547 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.197556 | orchestrator |
2026-02-28 00:32:52.197566 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-28 00:32:52.197575 | orchestrator | Saturday 28 February 2026 00:31:59 +0000 (0:00:06.415) 0:05:31.485 *****
2026-02-28 00:32:52.197583 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.197592 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.197600 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.197609 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:52.197617 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.197626 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.197659 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.197665 | orchestrator |
2026-02-28 00:32:52.197670 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-28 00:32:52.197676 | orchestrator | Saturday 28 February 2026 00:32:00 +0000 (0:00:01.069) 0:05:32.555 *****
2026-02-28 00:32:52.197681 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:52.197686 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.197692 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.197699 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.197708 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.197716 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.197725 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.197734 | orchestrator |
2026-02-28 00:32:52.197743 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-28 00:32:52.197752 | orchestrator | Saturday 28 February 2026 00:32:08 +0000 (0:00:07.991) 0:05:40.546 *****
2026-02-28 00:32:52.197762 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.197771 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.197779 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.197784 | orchestrator | changed: [testbed-manager]
2026-02-28 00:32:52.197789 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.197797 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.197806 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.197814 | orchestrator |
2026-02-28 00:32:52.197823 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-28 00:32:52.197832 | orchestrator | Saturday 28 February 2026 00:32:11 +0000 (0:00:03.279) 0:05:43.826 *****
2026-02-28 00:32:52.197842 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.197851 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.197860 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:52.197870 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.197879 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.197888 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.197897 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.197906 | orchestrator |
2026-02-28 00:32:52.197930 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-28 00:32:52.197939 | orchestrator | Saturday 28 February 2026 00:32:13 +0000 (0:00:01.507) 0:05:45.334 *****
2026-02-28 00:32:52.197945 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.197951 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.197957 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:52.197964 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.197970 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.197976 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.197982 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.197991 | orchestrator |
2026-02-28 00:32:52.198000 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-28 00:32:52.198009 | orchestrator | Saturday 28 February 2026 00:32:14 +0000 (0:00:00.845) 0:05:46.622 *****
2026-02-28 00:32:52.198097 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:52.198108 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:52.198117 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:52.198126 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:32:52.198135 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:32:52.198144 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:32:52.198153 | orchestrator | changed: [testbed-manager]
2026-02-28 00:32:52.198162 | orchestrator |
2026-02-28 00:32:52.198171 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-28 00:32:52.198180 | orchestrator | Saturday 28 February 2026 00:32:15 +0000 (0:00:00.845) 0:05:47.467 *****
2026-02-28 00:32:52.198189 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:52.198198 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.198207 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.198226 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.198235 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.198244 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.198253 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.198262 | orchestrator |
2026-02-28 00:32:52.198271 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-28 00:32:52.198297 | orchestrator | Saturday 28 February 2026 00:32:24 +0000 (0:00:09.418) 0:05:56.886 *****
2026-02-28 00:32:52.198307 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.198316 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.198325 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.198334 | orchestrator | changed: [testbed-manager]
2026-02-28 00:32:52.198342 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.198351 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.198360 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.198369 | orchestrator |
2026-02-28 00:32:52.198378 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-28 00:32:52.198387 | orchestrator | Saturday 28 February 2026 00:32:25 +0000 (0:00:00.924) 0:05:57.811 *****
2026-02-28 00:32:52.198396 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:52.198405 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.198413 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.198422 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.198431 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.198439 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.198448 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.198458 | orchestrator |
2026-02-28 00:32:52.198466 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-28 00:32:52.198476 | orchestrator | Saturday 28 February 2026 00:32:34 +0000 (0:00:08.923) 0:06:06.734 *****
2026-02-28 00:32:52.198485 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:52.198493 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.198502 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.198510 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.198520 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.198528 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.198537 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.198546 | orchestrator |
2026-02-28 00:32:52.198555 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-28 00:32:52.198563 | orchestrator | Saturday 28 February 2026 00:32:45 +0000 (0:00:10.766) 0:06:17.500 *****
2026-02-28 00:32:52.198572 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-28 00:32:52.198582 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-28 00:32:52.198591 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-28 00:32:52.198600 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-28 00:32:52.198608 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-28 00:32:52.198617 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-28 00:32:52.198627 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-28 00:32:52.198635 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-28 00:32:52.198644 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-28 00:32:52.198653 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-28 00:32:52.198662 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-28 00:32:52.198671 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-28 00:32:52.198680 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-28 00:32:52.198689 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-28 00:32:52.198697 | orchestrator |
2026-02-28 00:32:52.198706 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-28 00:32:52.198716 | orchestrator | Saturday 28 February 2026 00:32:46 +0000 (0:00:01.251) 0:06:18.751 *****
2026-02-28 00:32:52.198731 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:52.198740 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:52.198749 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:52.198758 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:32:52.198766 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:32:52.198775 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:32:52.198784 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:32:52.198793 | orchestrator |
2026-02-28 00:32:52.198802 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-28 00:32:52.198810 | orchestrator | Saturday 28 February 2026 00:32:47 +0000 (0:00:00.530) 0:06:19.282 *****
2026-02-28 00:32:52.198819 | orchestrator | ok: [testbed-manager]
2026-02-28 00:32:52.198828 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:32:52.198837 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:32:52.198846 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:32:52.198855 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:32:52.198864 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:32:52.198872 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:32:52.198881 | orchestrator |
2026-02-28 00:32:52.198890 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-28 00:32:52.198899 | orchestrator | Saturday 28 February 2026 00:32:51 +0000 (0:00:03.856) 0:06:23.138 *****
2026-02-28 00:32:52.198908 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:52.198918 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:52.198924 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:52.198929 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:32:52.198934 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:32:52.198940 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:32:52.198945 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:32:52.198950 | orchestrator |
2026-02-28 00:32:52.198956 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-28 00:32:52.198961 | orchestrator | Saturday 28 February 2026 00:32:51 +0000 (0:00:00.678) 0:06:23.817 *****
2026-02-28 00:32:52.198966 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker) 
2026-02-28 00:32:52.198972 | orchestrator | skipping: [testbed-node-3] => (item=python-docker) 
2026-02-28 00:32:52.199019 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:32:52.199030 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker) 
2026-02-28 00:32:52.199039 | orchestrator | skipping: [testbed-node-4] => (item=python-docker) 
2026-02-28 00:32:52.199048 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:32:52.199057 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker) 
2026-02-28 00:32:52.199066 | orchestrator | skipping: [testbed-node-5] => (item=python-docker) 
2026-02-28 00:32:52.199120 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:32:52.199138 | orchestrator | skipping: [testbed-manager] => (item=python3-docker) 
2026-02-28 00:33:11.221419 | orchestrator | skipping: [testbed-manager] => (item=python-docker) 
2026-02-28 00:33:11.221508 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:11.221515 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker) 
2026-02-28 00:33:11.221520 | orchestrator | skipping: [testbed-node-0] => (item=python-docker) 
2026-02-28 00:33:11.221525 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:11.221529 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker) 
2026-02-28 00:33:11.221533 | orchestrator | skipping: [testbed-node-1] => (item=python-docker) 
2026-02-28 00:33:11.221537 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:11.221541 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker) 
2026-02-28 00:33:11.221545 | orchestrator | skipping: [testbed-node-2] => (item=python-docker) 
2026-02-28 00:33:11.221549 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:11.221553 | orchestrator |
2026-02-28 00:33:11.221558 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-28 00:33:11.221581 | orchestrator | Saturday 28 February 2026 00:32:52 +0000 (0:00:00.588) 0:06:24.406 *****
2026-02-28 00:33:11.221611 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:33:11.221617 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:33:11.221621 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:33:11.221626 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:11.221630 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:11.221635 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:11.221639 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:11.221643 | orchestrator |
2026-02-28 00:33:11.221647 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-28 00:33:11.221651 | orchestrator | Saturday 28 February 2026 00:32:53 +0000 (0:00:00.514) 0:06:24.920 *****
2026-02-28 00:33:11.221656 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:33:11.221660 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:33:11.221664 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:33:11.221668 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:11.221672 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:11.221676 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:11.221679 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:11.221684 | orchestrator |
2026-02-28 00:33:11.221688 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-28 00:33:11.221692 | orchestrator | Saturday 28 February 2026 00:32:53 +0000 (0:00:00.520) 0:06:25.440 *****
2026-02-28 00:33:11.221696 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:33:11.221700 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:33:11.221703 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:33:11.221707 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:11.221711 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:33:11.221715 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:33:11.221719 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:33:11.221723 | orchestrator |
2026-02-28 00:33:11.221727 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-02-28 00:33:11.221731 | orchestrator | Saturday 28 February 2026 00:32:54 +0000 (0:00:00.518) 0:06:25.959 *****
2026-02-28 00:33:11.221734 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:33:11.221739 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:33:11.221743 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:11.221746 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:33:11.221750 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:33:11.221754 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:33:11.221758 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:33:11.221762 | orchestrator |
2026-02-28 00:33:11.221766 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-02-28 00:33:11.221770 | orchestrator | Saturday 28 February 2026 00:32:55 +0000 (0:00:01.864) 0:06:27.824 *****
2026-02-28 00:33:11.221774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:33:11.221780 | orchestrator |
2026-02-28 00:33:11.221793 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-02-28 00:33:11.221798 | orchestrator | Saturday 28 February 2026 00:32:56 +0000 (0:00:00.849) 0:06:28.673 *****
2026-02-28 00:33:11.221802 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:11.221805 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:11.221809 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:11.221813 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:11.221817 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:11.221823 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:11.221829 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:11.221836 | orchestrator |
2026-02-28 00:33:11.221843 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-02-28 00:33:11.221856 | orchestrator | Saturday 28 February 2026 00:32:57 +0000 (0:00:00.845) 0:06:29.519 *****
2026-02-28 00:33:11.221863 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:11.221870 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:11.221876 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:11.221882 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:11.221889 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:11.221896 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:11.221903 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:11.221911 | orchestrator |
2026-02-28 00:33:11.221919 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-02-28 00:33:11.221926 | orchestrator | Saturday 28 February 2026 00:32:58 +0000 (0:00:01.053) 0:06:30.573 *****
2026-02-28 00:33:11.221934 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:11.221939 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:11.221942 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:11.221946 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:11.221950 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:11.221956 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:11.221962 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:11.221968 | orchestrator |
2026-02-28 00:33:11.221974 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-02-28 00:33:11.221995 | orchestrator | Saturday 28 February 2026 00:33:00 +0000 (0:00:01.337) 0:06:31.911 *****
2026-02-28 00:33:11.222002 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:33:11.222010 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:33:11.222053 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:33:11.222062 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:33:11.222069 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:33:11.222076 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:33:11.222102 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:33:11.222108 | orchestrator |
2026-02-28 00:33:11.222114 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-02-28 00:33:11.222121 | orchestrator | Saturday 28 February 2026 00:33:01 +0000 (0:00:01.364) 0:06:33.275 *****
2026-02-28 00:33:11.222128 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:33:11.222135 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:33:11.222140 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:33:11.222144 | orchestrator | ok: [testbed-manager]
2026-02-28 00:33:11.222150 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:33:11.222156 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:33:11.222163 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:33:11.222169 | orchestrator |
2026-02-28 
00:33:11.222177 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-28 00:33:11.222183 | orchestrator | Saturday 28 February 2026 00:33:02 +0000 (0:00:01.272) 0:06:34.548 ***** 2026-02-28 00:33:11.222189 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:11.222196 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:11.222202 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:11.222208 | orchestrator | changed: [testbed-manager] 2026-02-28 00:33:11.222214 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:11.222220 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:11.222226 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:11.222233 | orchestrator | 2026-02-28 00:33:11.222240 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-28 00:33:11.222246 | orchestrator | Saturday 28 February 2026 00:33:04 +0000 (0:00:01.450) 0:06:35.999 ***** 2026-02-28 00:33:11.222254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:33:11.222261 | orchestrator | 2026-02-28 00:33:11.222267 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-28 00:33:11.222274 | orchestrator | Saturday 28 February 2026 00:33:05 +0000 (0:00:01.026) 0:06:37.025 ***** 2026-02-28 00:33:11.222294 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:11.222302 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:11.222310 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:11.222316 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:11.222323 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:11.222329 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:11.222336 | orchestrator | ok: 
[testbed-node-2] 2026-02-28 00:33:11.222343 | orchestrator | 2026-02-28 00:33:11.222349 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-28 00:33:11.222354 | orchestrator | Saturday 28 February 2026 00:33:06 +0000 (0:00:01.349) 0:06:38.375 ***** 2026-02-28 00:33:11.222361 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:11.222368 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:11.222374 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:11.222379 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:11.222386 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:11.222392 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:11.222398 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:11.222403 | orchestrator | 2026-02-28 00:33:11.222410 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-28 00:33:11.222417 | orchestrator | Saturday 28 February 2026 00:33:07 +0000 (0:00:01.145) 0:06:39.520 ***** 2026-02-28 00:33:11.222423 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:11.222429 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:11.222439 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:11.222445 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:11.222452 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:11.222459 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:11.222466 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:11.222473 | orchestrator | 2026-02-28 00:33:11.222480 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-28 00:33:11.222487 | orchestrator | Saturday 28 February 2026 00:33:08 +0000 (0:00:01.149) 0:06:40.670 ***** 2026-02-28 00:33:11.222493 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:11.222500 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:11.222506 | orchestrator | ok: [testbed-node-5] 2026-02-28 
00:33:11.222512 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:11.222519 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:11.222525 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:11.222532 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:11.222538 | orchestrator | 2026-02-28 00:33:11.222545 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-28 00:33:11.222551 | orchestrator | Saturday 28 February 2026 00:33:10 +0000 (0:00:01.409) 0:06:42.080 ***** 2026-02-28 00:33:11.222558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:33:11.222565 | orchestrator | 2026-02-28 00:33:11.222572 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:11.222578 | orchestrator | Saturday 28 February 2026 00:33:11 +0000 (0:00:00.898) 0:06:42.978 ***** 2026-02-28 00:33:11.222585 | orchestrator | 2026-02-28 00:33:11.222591 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:11.222597 | orchestrator | Saturday 28 February 2026 00:33:11 +0000 (0:00:00.043) 0:06:43.022 ***** 2026-02-28 00:33:11.222604 | orchestrator | 2026-02-28 00:33:11.222610 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:11.222617 | orchestrator | Saturday 28 February 2026 00:33:11 +0000 (0:00:00.037) 0:06:43.060 ***** 2026-02-28 00:33:11.222623 | orchestrator | 2026-02-28 00:33:11.222629 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:11.222642 | orchestrator | Saturday 28 February 2026 00:33:11 +0000 (0:00:00.059) 0:06:43.119 ***** 2026-02-28 00:33:36.930665 | orchestrator | 
2026-02-28 00:33:36.930785 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:36.930830 | orchestrator | Saturday 28 February 2026 00:33:11 +0000 (0:00:00.047) 0:06:43.167 ***** 2026-02-28 00:33:36.930843 | orchestrator | 2026-02-28 00:33:36.930855 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:36.930865 | orchestrator | Saturday 28 February 2026 00:33:11 +0000 (0:00:00.038) 0:06:43.206 ***** 2026-02-28 00:33:36.930876 | orchestrator | 2026-02-28 00:33:36.930887 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-28 00:33:36.930898 | orchestrator | Saturday 28 February 2026 00:33:11 +0000 (0:00:00.038) 0:06:43.244 ***** 2026-02-28 00:33:36.930909 | orchestrator | 2026-02-28 00:33:36.930920 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-28 00:33:36.930930 | orchestrator | Saturday 28 February 2026 00:33:11 +0000 (0:00:00.044) 0:06:43.289 ***** 2026-02-28 00:33:36.930941 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:36.930953 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:36.930963 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:36.930974 | orchestrator | 2026-02-28 00:33:36.930985 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-28 00:33:36.930995 | orchestrator | Saturday 28 February 2026 00:33:12 +0000 (0:00:01.110) 0:06:44.399 ***** 2026-02-28 00:33:36.931006 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:36.931018 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:36.931029 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:36.931039 | orchestrator | changed: [testbed-manager] 2026-02-28 00:33:36.931050 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:36.931060 | orchestrator | changed: 
[testbed-node-1] 2026-02-28 00:33:36.931071 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:36.931082 | orchestrator | 2026-02-28 00:33:36.931130 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-28 00:33:36.931150 | orchestrator | Saturday 28 February 2026 00:33:13 +0000 (0:00:01.495) 0:06:45.894 ***** 2026-02-28 00:33:36.931170 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:36.931202 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:36.931219 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:36.931238 | orchestrator | changed: [testbed-manager] 2026-02-28 00:33:36.931257 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:36.931275 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:36.931287 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:36.931299 | orchestrator | 2026-02-28 00:33:36.931312 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-28 00:33:36.931325 | orchestrator | Saturday 28 February 2026 00:33:15 +0000 (0:00:01.206) 0:06:47.101 ***** 2026-02-28 00:33:36.931337 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:36.931348 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:36.931360 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:36.931372 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:36.931385 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:36.931397 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:36.931409 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:36.931422 | orchestrator | 2026-02-28 00:33:36.931434 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-28 00:33:36.931447 | orchestrator | Saturday 28 February 2026 00:33:17 +0000 (0:00:02.466) 0:06:49.568 ***** 2026-02-28 00:33:36.931459 | orchestrator | skipping: [testbed-node-3] 
2026-02-28 00:33:36.931471 | orchestrator | 2026-02-28 00:33:36.931483 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-28 00:33:36.931496 | orchestrator | Saturday 28 February 2026 00:33:17 +0000 (0:00:00.097) 0:06:49.665 ***** 2026-02-28 00:33:36.931509 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:36.931521 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:36.931534 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:36.931546 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:36.931559 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:36.931578 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:33:36.931589 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:36.931599 | orchestrator | 2026-02-28 00:33:36.931625 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-28 00:33:36.931638 | orchestrator | Saturday 28 February 2026 00:33:18 +0000 (0:00:01.034) 0:06:50.699 ***** 2026-02-28 00:33:36.931648 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:36.931659 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:36.931669 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:36.931680 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:36.931690 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:36.931701 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:36.931711 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:36.931722 | orchestrator | 2026-02-28 00:33:36.931733 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-28 00:33:36.931743 | orchestrator | Saturday 28 February 2026 00:33:19 +0000 (0:00:00.704) 0:06:51.404 ***** 2026-02-28 00:33:36.931755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:33:36.931769 | orchestrator | 2026-02-28 00:33:36.931780 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-28 00:33:36.931790 | orchestrator | Saturday 28 February 2026 00:33:20 +0000 (0:00:00.870) 0:06:52.275 ***** 2026-02-28 00:33:36.931801 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:36.931812 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:36.931822 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:36.931833 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:36.931844 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:36.931854 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:36.931865 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:36.931875 | orchestrator | 2026-02-28 00:33:36.931886 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-28 00:33:36.931897 | orchestrator | Saturday 28 February 2026 00:33:21 +0000 (0:00:00.825) 0:06:53.101 ***** 2026-02-28 00:33:36.931908 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-28 00:33:36.931939 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-28 00:33:36.931951 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-28 00:33:36.931962 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-28 00:33:36.931972 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-28 00:33:36.931983 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-28 00:33:36.931994 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-28 00:33:36.932004 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-28 00:33:36.932015 | orchestrator | changed: [testbed-node-3] => 
(item=docker_images) 2026-02-28 00:33:36.932026 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-28 00:33:36.932036 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-28 00:33:36.932047 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-28 00:33:36.932057 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-28 00:33:36.932068 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-28 00:33:36.932078 | orchestrator | 2026-02-28 00:33:36.932089 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-02-28 00:33:36.932138 | orchestrator | Saturday 28 February 2026 00:33:23 +0000 (0:00:02.637) 0:06:55.739 ***** 2026-02-28 00:33:36.932150 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:36.932161 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:36.932172 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:36.932182 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:36.932209 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:36.932228 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:36.932245 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:36.932263 | orchestrator | 2026-02-28 00:33:36.932281 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-28 00:33:36.932297 | orchestrator | Saturday 28 February 2026 00:33:24 +0000 (0:00:00.515) 0:06:56.254 ***** 2026-02-28 00:33:36.932309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:33:36.932322 | orchestrator | 2026-02-28 00:33:36.932333 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-02-28 00:33:36.932344 | orchestrator | Saturday 28 February 2026 00:33:25 +0000 (0:00:00.865) 0:06:57.119 ***** 2026-02-28 00:33:36.932354 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:36.932365 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:36.932375 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:36.932386 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:36.932396 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:36.932407 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:36.932417 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:36.932427 | orchestrator | 2026-02-28 00:33:36.932438 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-28 00:33:36.932449 | orchestrator | Saturday 28 February 2026 00:33:26 +0000 (0:00:00.912) 0:06:58.031 ***** 2026-02-28 00:33:36.932460 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:36.932470 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:36.932480 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:36.932491 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:36.932501 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:36.932511 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:36.932522 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:36.932532 | orchestrator | 2026-02-28 00:33:36.932543 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-28 00:33:36.932553 | orchestrator | Saturday 28 February 2026 00:33:27 +0000 (0:00:01.117) 0:06:59.149 ***** 2026-02-28 00:33:36.932564 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:36.932575 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:36.932585 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:36.932603 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:36.932614 | orchestrator | skipping: [testbed-node-0] 
2026-02-28 00:33:36.932625 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:36.932635 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:36.932646 | orchestrator | 2026-02-28 00:33:36.932657 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-28 00:33:36.932667 | orchestrator | Saturday 28 February 2026 00:33:27 +0000 (0:00:00.513) 0:06:59.663 ***** 2026-02-28 00:33:36.932678 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:33:36.932688 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:33:36.932699 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:33:36.932709 | orchestrator | ok: [testbed-manager] 2026-02-28 00:33:36.932720 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:33:36.932730 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:33:36.932740 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:33:36.932751 | orchestrator | 2026-02-28 00:33:36.932761 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-28 00:33:36.932772 | orchestrator | Saturday 28 February 2026 00:33:29 +0000 (0:00:01.557) 0:07:01.220 ***** 2026-02-28 00:33:36.932782 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:33:36.932793 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:33:36.932803 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:33:36.932814 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:33:36.932824 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:33:36.932842 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:33:36.932853 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:33:36.932863 | orchestrator | 2026-02-28 00:33:36.932874 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-28 00:33:36.932884 | orchestrator | Saturday 28 February 2026 00:33:29 +0000 (0:00:00.540) 0:07:01.760 ***** 2026-02-28 00:33:36.932895 | orchestrator | 
ok: [testbed-manager] 2026-02-28 00:33:36.932905 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:33:36.932916 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:33:36.932926 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:33:36.932937 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:33:36.932947 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:33:36.932966 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:08.671866 | orchestrator | 2026-02-28 00:34:08.672004 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-02-28 00:34:08.672034 | orchestrator | Saturday 28 February 2026 00:33:36 +0000 (0:00:07.142) 0:07:08.902 ***** 2026-02-28 00:34:08.672055 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:08.672076 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:08.672095 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:08.672217 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:08.672236 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:08.672256 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:08.672276 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:08.672294 | orchestrator | 2026-02-28 00:34:08.672313 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-28 00:34:08.672332 | orchestrator | Saturday 28 February 2026 00:33:38 +0000 (0:00:01.605) 0:07:10.508 ***** 2026-02-28 00:34:08.672351 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:08.672369 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:08.672388 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:08.672407 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:08.672425 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:08.672438 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:08.672457 | orchestrator | changed: [testbed-node-1] 2026-02-28 
00:34:08.672475 | orchestrator | 2026-02-28 00:34:08.672490 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-28 00:34:08.672505 | orchestrator | Saturday 28 February 2026 00:33:40 +0000 (0:00:01.669) 0:07:12.178 ***** 2026-02-28 00:34:08.672534 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:34:08.672556 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:34:08.672574 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:34:08.672591 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:08.672608 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:34:08.672626 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:34:08.672644 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:34:08.672661 | orchestrator | 2026-02-28 00:34:08.672679 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-28 00:34:08.672698 | orchestrator | Saturday 28 February 2026 00:33:42 +0000 (0:00:01.761) 0:07:13.940 ***** 2026-02-28 00:34:08.672716 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:08.672736 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:08.672754 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:08.672773 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:08.672791 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:08.672810 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:08.672822 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:08.672833 | orchestrator | 2026-02-28 00:34:08.672844 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-28 00:34:08.672855 | orchestrator | Saturday 28 February 2026 00:33:43 +0000 (0:00:01.100) 0:07:15.040 ***** 2026-02-28 00:34:08.672866 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:34:08.672877 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:34:08.672888 | orchestrator | skipping: 
[testbed-node-5] 2026-02-28 00:34:08.672928 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:34:08.672940 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:34:08.672951 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:34:08.672961 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:34:08.672973 | orchestrator | 2026-02-28 00:34:08.672984 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-28 00:34:08.672995 | orchestrator | Saturday 28 February 2026 00:33:43 +0000 (0:00:00.866) 0:07:15.907 ***** 2026-02-28 00:34:08.673006 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:34:08.673017 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:34:08.673028 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:34:08.673038 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:34:08.673049 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:34:08.673059 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:34:08.673070 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:34:08.673080 | orchestrator | 2026-02-28 00:34:08.673091 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-28 00:34:08.673127 | orchestrator | Saturday 28 February 2026 00:33:44 +0000 (0:00:00.509) 0:07:16.416 ***** 2026-02-28 00:34:08.673138 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:08.673149 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:08.673160 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:08.673170 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:08.673181 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:08.673192 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:08.673202 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:08.673213 | orchestrator | 2026-02-28 00:34:08.673223 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-02-28 00:34:08.673234 | orchestrator | Saturday 28 February 2026 00:33:45 +0000 (0:00:00.503) 0:07:16.920 ***** 2026-02-28 00:34:08.673245 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:08.673255 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:08.673266 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:08.673277 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:08.673287 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:08.673297 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:08.673308 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:08.673318 | orchestrator | 2026-02-28 00:34:08.673329 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-28 00:34:08.673340 | orchestrator | Saturday 28 February 2026 00:33:45 +0000 (0:00:00.683) 0:07:17.603 ***** 2026-02-28 00:34:08.673351 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:08.673361 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:08.673372 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:08.673382 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:08.673393 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:34:08.673403 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:34:08.673414 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:08.673424 | orchestrator | 2026-02-28 00:34:08.673435 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-28 00:34:08.673446 | orchestrator | Saturday 28 February 2026 00:33:46 +0000 (0:00:00.506) 0:07:18.110 ***** 2026-02-28 00:34:08.673457 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:34:08.673467 | orchestrator | ok: [testbed-manager] 2026-02-28 00:34:08.673478 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:34:08.673488 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:34:08.673499 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:34:08.673510 | orchestrator | ok: [testbed-node-0] 
2026-02-28 00:34:08.673520 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:08.673531 | orchestrator |
2026-02-28 00:34:08.673564 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-28 00:34:08.673576 | orchestrator | Saturday 28 February 2026 00:33:51 +0000 (0:00:05.500) 0:07:23.610 *****
2026-02-28 00:34:08.673587 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:34:08.673598 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:34:08.673644 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:34:08.673656 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:34:08.673667 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:34:08.673678 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:34:08.673688 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:34:08.673699 | orchestrator |
2026-02-28 00:34:08.673716 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-28 00:34:08.673735 | orchestrator | Saturday 28 February 2026 00:33:52 +0000 (0:00:00.522) 0:07:24.133 *****
2026-02-28 00:34:08.673756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:08.673776 | orchestrator |
2026-02-28 00:34:08.673795 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-28 00:34:08.673812 | orchestrator | Saturday 28 February 2026 00:33:53 +0000 (0:00:00.980) 0:07:25.114 *****
2026-02-28 00:34:08.673830 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:08.673850 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:08.673868 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:08.673889 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:08.673909 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:08.673930 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:08.673949 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:08.673969 | orchestrator |
2026-02-28 00:34:08.673991 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-28 00:34:08.674010 | orchestrator | Saturday 28 February 2026 00:33:55 +0000 (0:00:01.801) 0:07:26.916 *****
2026-02-28 00:34:08.674098 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:08.674133 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:08.674144 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:08.674155 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:08.674165 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:08.674176 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:08.674187 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:08.674198 | orchestrator |
2026-02-28 00:34:08.674209 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-28 00:34:08.674220 | orchestrator | Saturday 28 February 2026 00:33:56 +0000 (0:00:01.107) 0:07:28.023 *****
2026-02-28 00:34:08.674231 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:08.674241 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:08.674292 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:08.674304 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:08.674314 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:08.674325 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:08.674336 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:08.674347 | orchestrator |
2026-02-28 00:34:08.674358 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-28 00:34:08.674368 | orchestrator | Saturday 28 February 2026 00:33:56 +0000 (0:00:00.844) 0:07:28.868 *****
2026-02-28 00:34:08.674380 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:08.674393 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:08.674404 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:08.674423 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:08.674434 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:08.674445 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:08.674468 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-28 00:34:08.674479 | orchestrator |
2026-02-28 00:34:08.674490 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-28 00:34:08.674501 | orchestrator | Saturday 28 February 2026 00:33:58 +0000 (0:00:01.963) 0:07:30.831 *****
2026-02-28 00:34:08.674512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:08.674524 | orchestrator |
2026-02-28 00:34:08.674536 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-28 00:34:08.674555 | orchestrator | Saturday 28 February 2026 00:33:59 +0000 (0:00:00.793) 0:07:31.624 *****
2026-02-28 00:34:08.674572 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:08.674589 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:08.674606 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:08.674623 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:08.674639 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:08.674658 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:08.674675 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:08.674694 | orchestrator |
2026-02-28 00:34:08.674726 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-28 00:34:39.172894 | orchestrator | Saturday 28 February 2026 00:34:08 +0000 (0:00:08.945) 0:07:40.570 *****
2026-02-28 00:34:39.172973 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:39.172979 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:39.172983 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:39.172987 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:39.172991 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:39.172995 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:39.172999 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:39.173003 | orchestrator |
2026-02-28 00:34:39.173008 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-28 00:34:39.173012 | orchestrator | Saturday 28 February 2026 00:34:10 +0000 (0:00:01.994) 0:07:42.565 *****
2026-02-28 00:34:39.173016 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:39.173020 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:39.173024 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:39.173027 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:39.173031 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:39.173035 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:39.173039 | orchestrator |
2026-02-28 00:34:39.173043 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-28 00:34:39.173047 | orchestrator | Saturday 28 February 2026 00:34:11 +0000 (0:00:01.288) 0:07:43.853 *****
2026-02-28 00:34:39.173051 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:39.173055 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:39.173059 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:39.173063 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:39.173066 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:39.173070 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:39.173074 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:39.173078 | orchestrator |
2026-02-28 00:34:39.173081 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-28 00:34:39.173085 | orchestrator |
2026-02-28 00:34:39.173089 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-28 00:34:39.173093 | orchestrator | Saturday 28 February 2026 00:34:13 +0000 (0:00:01.278) 0:07:45.131 *****
2026-02-28 00:34:39.173096 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:34:39.173100 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:34:39.173156 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:34:39.173161 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:34:39.173164 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:34:39.173168 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:34:39.173172 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:34:39.173176 | orchestrator |
2026-02-28 00:34:39.173179 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-28 00:34:39.173183 | orchestrator |
2026-02-28 00:34:39.173187 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-28 00:34:39.173190 | orchestrator | Saturday 28 February 2026 00:34:13 +0000 (0:00:00.741) 0:07:45.873 *****
2026-02-28 00:34:39.173194 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:39.173198 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:39.173201 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:39.173205 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:39.173209 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:39.173213 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:39.173217 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:39.173220 | orchestrator |
2026-02-28 00:34:39.173224 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-28 00:34:39.173228 | orchestrator | Saturday 28 February 2026 00:34:15 +0000 (0:00:01.351) 0:07:47.225 *****
2026-02-28 00:34:39.173231 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:39.173242 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:39.173246 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:39.173250 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:39.173253 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:39.173257 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:39.173261 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:39.173265 | orchestrator |
2026-02-28 00:34:39.173268 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-28 00:34:39.173272 | orchestrator | Saturday 28 February 2026 00:34:16 +0000 (0:00:01.428) 0:07:48.653 *****
2026-02-28 00:34:39.173282 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:34:39.173296 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:34:39.173300 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:34:39.173304 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:34:39.173307 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:34:39.173311 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:34:39.173315 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:34:39.173318 | orchestrator |
2026-02-28 00:34:39.173322 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-28 00:34:39.173326 | orchestrator | Saturday 28 February 2026 00:34:17 +0000 (0:00:00.658) 0:07:49.312 *****
2026-02-28 00:34:39.173330 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:39.173335 | orchestrator |
2026-02-28 00:34:39.173339 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-28 00:34:39.173342 | orchestrator | Saturday 28 February 2026 00:34:18 +0000 (0:00:00.822) 0:07:50.134 *****
2026-02-28 00:34:39.173347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:39.173353 | orchestrator |
2026-02-28 00:34:39.173357 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-28 00:34:39.173361 | orchestrator | Saturday 28 February 2026 00:34:19 +0000 (0:00:00.866) 0:07:51.000 *****
2026-02-28 00:34:39.173364 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:39.173368 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:39.173372 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:39.173375 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:39.173379 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:39.173387 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:39.173390 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:39.173394 | orchestrator |
2026-02-28 00:34:39.173407 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-28 00:34:39.173411 | orchestrator | Saturday 28 February 2026 00:34:27 +0000 (0:00:08.473) 0:07:59.473 *****
2026-02-28 00:34:39.173415 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:39.173419 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:39.173423 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:39.173426 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:39.173430 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:39.173434 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:39.173438 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:39.173441 | orchestrator |
2026-02-28 00:34:39.173445 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-28 00:34:39.173449 | orchestrator | Saturday 28 February 2026 00:34:28 +0000 (0:00:00.831) 0:08:00.305 *****
2026-02-28 00:34:39.173453 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:39.173458 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:39.173462 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:39.173466 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:39.173470 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:39.173474 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:39.173478 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:39.173482 | orchestrator |
2026-02-28 00:34:39.173486 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-28 00:34:39.173491 | orchestrator | Saturday 28 February 2026 00:34:29 +0000 (0:00:01.258) 0:08:01.564 *****
2026-02-28 00:34:39.173495 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:39.173499 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:39.173503 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:39.173507 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:39.173512 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:39.173516 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:39.173520 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:39.173524 | orchestrator |
2026-02-28 00:34:39.173529 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-28 00:34:39.173533 | orchestrator | Saturday 28 February 2026 00:34:31 +0000 (0:00:01.848) 0:08:03.412 *****
2026-02-28 00:34:39.173537 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:39.173541 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:39.173545 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:39.173549 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:39.173553 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:39.173558 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:39.173562 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:39.173566 | orchestrator |
2026-02-28 00:34:39.173570 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-28 00:34:39.173574 | orchestrator | Saturday 28 February 2026 00:34:32 +0000 (0:00:01.164) 0:08:04.577 *****
2026-02-28 00:34:39.173579 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:39.173583 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:39.173587 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:39.173591 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:39.173596 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:39.173600 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:39.173604 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:39.173608 | orchestrator |
2026-02-28 00:34:39.173612 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-28 00:34:39.173616 | orchestrator |
2026-02-28 00:34:39.173621 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-28 00:34:39.173625 | orchestrator | Saturday 28 February 2026 00:34:34 +0000 (0:00:01.783) 0:08:06.360 *****
2026-02-28 00:34:39.173633 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:39.173638 | orchestrator |
2026-02-28 00:34:39.173642 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-28 00:34:39.173646 | orchestrator | Saturday 28 February 2026 00:34:35 +0000 (0:00:00.943) 0:08:07.303 *****
2026-02-28 00:34:39.173650 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:39.173657 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:39.173661 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:39.173666 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:39.173670 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:39.173674 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:39.173678 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:39.173682 | orchestrator |
2026-02-28 00:34:39.173687 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-28 00:34:39.173691 | orchestrator | Saturday 28 February 2026 00:34:36 +0000 (0:00:00.847) 0:08:08.151 *****
2026-02-28 00:34:39.173695 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:39.173700 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:39.173704 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:39.173708 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:39.173712 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:39.173716 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:39.173720 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:39.173724 | orchestrator |
2026-02-28 00:34:39.173729 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-28 00:34:39.173733 | orchestrator | Saturday 28 February 2026 00:34:37 +0000 (0:00:01.110) 0:08:09.262 *****
2026-02-28 00:34:39.173737 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:34:39.173741 | orchestrator |
2026-02-28 00:34:39.173746 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-28 00:34:39.173750 | orchestrator | Saturday 28 February 2026 00:34:38 +0000 (0:00:00.985) 0:08:10.247 *****
2026-02-28 00:34:39.173754 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:34:39.173758 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:34:39.173763 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:34:39.173767 | orchestrator | ok: [testbed-manager]
2026-02-28 00:34:39.173771 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:34:39.173775 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:34:39.173779 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:34:39.173783 | orchestrator |
2026-02-28 00:34:39.173790 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-28 00:34:40.595969 | orchestrator | Saturday 28 February 2026 00:34:39 +0000 (0:00:00.825) 0:08:11.072 *****
2026-02-28 00:34:40.596078 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:34:40.596096 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:34:40.596109 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:34:40.596204 | orchestrator | changed: [testbed-manager]
2026-02-28 00:34:40.596216 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:34:40.596227 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:34:40.596237 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:34:40.596248 | orchestrator |
2026-02-28 00:34:40.596260 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:34:40.596272 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-28 00:34:40.596285 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-28 00:34:40.596296 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-28 00:34:40.596338 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-28 00:34:40.596350 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-28 00:34:40.596361 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-28 00:34:40.596371 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-28 00:34:40.596382 | orchestrator |
2026-02-28 00:34:40.596392 | orchestrator |
2026-02-28 00:34:40.596403 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:34:40.596414 | orchestrator | Saturday 28 February 2026 00:34:40 +0000 (0:00:01.076) 0:08:12.148 *****
2026-02-28 00:34:40.596425 | orchestrator | ===============================================================================
2026-02-28 00:34:40.596435 | orchestrator | osism.commons.packages : Install required packages --------------------- 87.27s
2026-02-28 00:34:40.596446 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.26s
2026-02-28 00:34:40.596456 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.42s
2026-02-28 00:34:40.596467 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.02s
2026-02-28 00:34:40.596478 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.99s
2026-02-28 00:34:40.596490 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.81s
2026-02-28 00:34:40.596503 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.77s
2026-02-28 00:34:40.596516 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.42s
2026-02-28 00:34:40.596528 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.95s
2026-02-28 00:34:40.596540 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.92s
2026-02-28 00:34:40.596553 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.47s
2026-02-28 00:34:40.596579 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.18s
2026-02-28 00:34:40.596593 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.99s
2026-02-28 00:34:40.596606 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.86s
2026-02-28 00:34:40.596619 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.68s
2026-02-28 00:34:40.596631 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.14s
2026-02-28 00:34:40.596644 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.42s
2026-02-28 00:34:40.596663 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.14s
2026-02-28 00:34:40.596695 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.79s
2026-02-28 00:34:40.596715 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.62s
2026-02-28 00:34:40.896161 | orchestrator | + osism apply fail2ban
2026-02-28 00:34:53.577098 | orchestrator | 2026-02-28 00:34:53 | INFO  | Prepare task for execution of fail2ban.
2026-02-28 00:34:53.651300 | orchestrator | 2026-02-28 00:34:53 | INFO  | Task c37d866d-c267-468b-a528-6eda73e8d699 (fail2ban) was prepared for execution.
2026-02-28 00:34:53.651394 | orchestrator | 2026-02-28 00:34:53 | INFO  | It takes a moment until task c37d866d-c267-468b-a528-6eda73e8d699 (fail2ban) has been started and output is visible here.
2026-02-28 00:35:14.171736 | orchestrator |
2026-02-28 00:35:14.171830 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-28 00:35:14.171864 | orchestrator |
2026-02-28 00:35:14.171874 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-28 00:35:14.171883 | orchestrator | Saturday 28 February 2026 00:34:57 +0000 (0:00:00.201) 0:00:00.201 *****
2026-02-28 00:35:14.171892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:35:14.171906 | orchestrator |
2026-02-28 00:35:14.171920 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-28 00:35:14.171933 | orchestrator | Saturday 28 February 2026 00:34:58 +0000 (0:00:00.982) 0:00:01.183 *****
2026-02-28 00:35:14.171947 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:14.171960 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:14.171973 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:14.171986 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:14.171998 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:14.172010 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:14.172024 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:14.172037 | orchestrator |
2026-02-28 00:35:14.172050 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-28 00:35:14.172063 | orchestrator | Saturday 28 February 2026 00:35:09 +0000 (0:00:10.292) 0:00:11.476 *****
2026-02-28 00:35:14.172076 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:14.172090 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:14.172103 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:14.172117 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:14.172171 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:14.172179 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:14.172187 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:14.172195 | orchestrator |
2026-02-28 00:35:14.172203 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-28 00:35:14.172211 | orchestrator | Saturday 28 February 2026 00:35:10 +0000 (0:00:01.499) 0:00:12.975 *****
2026-02-28 00:35:14.172219 | orchestrator | ok: [testbed-manager]
2026-02-28 00:35:14.172228 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:14.172236 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:14.172244 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:14.172251 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:14.172259 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:14.172267 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:14.172275 | orchestrator |
2026-02-28 00:35:14.172289 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-28 00:35:14.172309 | orchestrator | Saturday 28 February 2026 00:35:12 +0000 (0:00:01.510) 0:00:14.485 *****
2026-02-28 00:35:14.172324 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:14.172337 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:14.172349 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:14.172363 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:14.172377 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:14.172389 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:14.172401 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:14.172415 | orchestrator |
2026-02-28 00:35:14.172429 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:35:14.172444 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:14.172461 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:14.172476 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:14.172491 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:14.172525 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:14.172533 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:14.172541 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:35:14.172548 | orchestrator |
2026-02-28 00:35:14.172556 | orchestrator |
2026-02-28 00:35:14.172564 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:35:14.172572 | orchestrator | Saturday 28 February 2026 00:35:13 +0000 (0:00:01.573) 0:00:16.059 *****
2026-02-28 00:35:14.172580 | orchestrator | ===============================================================================
2026-02-28 00:35:14.172587 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.29s
2026-02-28 00:35:14.172607 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.57s
2026-02-28 00:35:14.172615 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.51s
2026-02-28 00:35:14.172623 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.50s
2026-02-28 00:35:14.172631 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 0.98s
2026-02-28 00:35:14.532243 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-28 00:35:14.532350 | orchestrator | + osism apply network
2026-02-28 00:35:26.606699 | orchestrator | 2026-02-28 00:35:26 | INFO  | Prepare task for execution of network.
2026-02-28 00:35:26.679290 | orchestrator | 2026-02-28 00:35:26 | INFO  | Task 8bf3a9ac-3aaf-439b-8e83-08534667e101 (network) was prepared for execution.
2026-02-28 00:35:26.679386 | orchestrator | 2026-02-28 00:35:26 | INFO  | It takes a moment until task 8bf3a9ac-3aaf-439b-8e83-08534667e101 (network) has been started and output is visible here.
2026-02-28 00:35:53.446462 | orchestrator |
2026-02-28 00:35:53.446591 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-28 00:35:53.446610 | orchestrator |
2026-02-28 00:35:53.446689 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-28 00:35:53.446704 | orchestrator | Saturday 28 February 2026 00:35:30 +0000 (0:00:00.195) 0:00:00.195 *****
2026-02-28 00:35:53.446716 | orchestrator | ok: [testbed-manager]
2026-02-28 00:35:53.446728 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:53.446739 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:53.446750 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:53.446761 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:53.446771 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:53.446783 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:53.446794 | orchestrator |
2026-02-28 00:35:53.446805 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-28 00:35:53.446815 | orchestrator | Saturday 28 February 2026 00:35:31 +0000 (0:00:00.510) 0:00:00.705 *****
2026-02-28 00:35:53.446829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:35:53.446842 | orchestrator |
2026-02-28 00:35:53.446854 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-28 00:35:53.446864 | orchestrator | Saturday 28 February 2026 00:35:32 +0000 (0:00:00.848) 0:00:01.554 *****
2026-02-28 00:35:53.446875 | orchestrator | ok: [testbed-manager]
2026-02-28 00:35:53.446886 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:53.446897 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:53.446907 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:53.446918 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:53.446948 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:53.446959 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:53.446970 | orchestrator |
2026-02-28 00:35:53.446980 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-28 00:35:53.446991 | orchestrator | Saturday 28 February 2026 00:35:33 +0000 (0:00:01.520) 0:00:03.074 *****
2026-02-28 00:35:53.447002 | orchestrator | ok: [testbed-manager]
2026-02-28 00:35:53.447012 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:35:53.447023 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:35:53.447033 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:35:53.447044 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:35:53.447054 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:35:53.447065 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:35:53.447075 | orchestrator |
2026-02-28 00:35:53.447086 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-28 00:35:53.447096 | orchestrator | Saturday 28 February 2026 00:35:35 +0000 (0:00:01.380) 0:00:04.455 *****
2026-02-28 00:35:53.447107 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-28 00:35:53.447119 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-28 00:35:53.447130 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-28 00:35:53.447232 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-28 00:35:53.447243 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-28 00:35:53.447254 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-28 00:35:53.447266 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-28 00:35:53.447284 | orchestrator |
2026-02-28 00:35:53.447296 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-28 00:35:53.447307 | orchestrator | Saturday 28 February 2026 00:35:35 +0000 (0:00:00.848) 0:00:05.304 *****
2026-02-28 00:35:53.447318 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-28 00:35:53.447330 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:35:53.447340 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 00:35:53.447351 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-28 00:35:53.447361 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-28 00:35:53.447372 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-28 00:35:53.447383 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-28 00:35:53.447394 | orchestrator |
2026-02-28 00:35:53.447405 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-28 00:35:53.447416 | orchestrator | Saturday 28 February 2026 00:35:39 +0000 (0:00:03.351) 0:00:08.655 *****
2026-02-28 00:35:53.447427 | orchestrator | changed: [testbed-manager]
2026-02-28 00:35:53.447438 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:35:53.447448 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:35:53.447459 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:35:53.447469 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:35:53.447479 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:35:53.447490 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:35:53.447501 | orchestrator |
2026-02-28 00:35:53.447512 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-28 00:35:53.447522 | orchestrator | Saturday 28 February 2026 00:35:40 +0000 (0:00:01.591) 0:00:10.247 *****
2026-02-28 00:35:53.447533 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:35:53.447544 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 00:35:53.447555 | orchestrator | ok: [testbed-node-1
-> localhost] 2026-02-28 00:35:53.447576 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-28 00:35:53.447587 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-28 00:35:53.447598 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-28 00:35:53.447609 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 00:35:53.447620 | orchestrator | 2026-02-28 00:35:53.447631 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-28 00:35:53.447641 | orchestrator | Saturday 28 February 2026 00:35:42 +0000 (0:00:01.825) 0:00:12.073 ***** 2026-02-28 00:35:53.447661 | orchestrator | ok: [testbed-manager] 2026-02-28 00:35:53.447672 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:35:53.447682 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:35:53.447693 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:35:53.447703 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:35:53.447713 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:35:53.447722 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:35:53.447732 | orchestrator | 2026-02-28 00:35:53.447741 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-28 00:35:53.447769 | orchestrator | Saturday 28 February 2026 00:35:43 +0000 (0:00:01.127) 0:00:13.201 ***** 2026-02-28 00:35:53.447779 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:35:53.447789 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:35:53.447799 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:35:53.447808 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:35:53.447818 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:35:53.447827 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:35:53.447837 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:35:53.447846 | orchestrator | 2026-02-28 00:35:53.447856 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-02-28 00:35:53.447866 | orchestrator | Saturday 28 February 2026 00:35:44 +0000 (0:00:00.649) 0:00:13.850 ***** 2026-02-28 00:35:53.447875 | orchestrator | ok: [testbed-manager] 2026-02-28 00:35:53.447885 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:35:53.447894 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:35:53.447903 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:35:53.447913 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:35:53.447922 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:35:53.447931 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:35:53.447941 | orchestrator | 2026-02-28 00:35:53.447950 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-28 00:35:53.447960 | orchestrator | Saturday 28 February 2026 00:35:46 +0000 (0:00:02.229) 0:00:16.080 ***** 2026-02-28 00:35:53.447970 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:35:53.447979 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:35:53.447989 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:35:53.447998 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:35:53.448008 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:35:53.448017 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:35:53.448027 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-28 00:35:53.448038 | orchestrator | 2026-02-28 00:35:53.448048 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-28 00:35:53.448057 | orchestrator | Saturday 28 February 2026 00:35:47 +0000 (0:00:00.878) 0:00:16.958 ***** 2026-02-28 00:35:53.448067 | orchestrator | ok: [testbed-manager] 2026-02-28 00:35:53.448076 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:35:53.448086 | orchestrator | changed: [testbed-node-2] 2026-02-28 
00:35:53.448095 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:35:53.448104 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:35:53.448113 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:35:53.448123 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:35:53.448154 | orchestrator | 2026-02-28 00:35:53.448166 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-28 00:35:53.448175 | orchestrator | Saturday 28 February 2026 00:35:49 +0000 (0:00:01.679) 0:00:18.638 ***** 2026-02-28 00:35:53.448185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:35:53.448197 | orchestrator | 2026-02-28 00:35:53.448206 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-28 00:35:53.448216 | orchestrator | Saturday 28 February 2026 00:35:50 +0000 (0:00:01.233) 0:00:19.872 ***** 2026-02-28 00:35:53.448232 | orchestrator | ok: [testbed-manager] 2026-02-28 00:35:53.448242 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:35:53.448251 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:35:53.448260 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:35:53.448269 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:35:53.448279 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:35:53.448288 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:35:53.448297 | orchestrator | 2026-02-28 00:35:53.448307 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-28 00:35:53.448317 | orchestrator | Saturday 28 February 2026 00:35:51 +0000 (0:00:01.102) 0:00:20.974 ***** 2026-02-28 00:35:53.448326 | orchestrator | ok: [testbed-manager] 2026-02-28 00:35:53.448340 | orchestrator | ok: [testbed-node-0] 2026-02-28 
00:35:53.448349 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:35:53.448359 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:35:53.448368 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:35:53.448377 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:35:53.448386 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:35:53.448395 | orchestrator | 2026-02-28 00:35:53.448405 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-28 00:35:53.448414 | orchestrator | Saturday 28 February 2026 00:35:52 +0000 (0:00:00.636) 0:00:21.610 ***** 2026-02-28 00:35:53.448424 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:35:53.448433 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:35:53.448443 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:35:53.448452 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:35:53.448461 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:35:53.448471 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:35:53.448480 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:35:53.448490 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:35:53.448499 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:35:53.448508 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:35:53.448518 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:35:53.448527 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-02-28 00:35:53.448536 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:35:53.448546 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-28 00:35:53.448556 | orchestrator | 2026-02-28 00:35:53.448572 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-28 00:36:08.109216 | orchestrator | Saturday 28 February 2026 00:35:53 +0000 (0:00:01.193) 0:00:22.804 ***** 2026-02-28 00:36:08.109325 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:36:08.109342 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:36:08.109354 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:36:08.109365 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:36:08.109376 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:36:08.109387 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:36:08.109398 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:36:08.109409 | orchestrator | 2026-02-28 00:36:08.109421 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-28 00:36:08.109432 | orchestrator | Saturday 28 February 2026 00:35:54 +0000 (0:00:00.613) 0:00:23.417 ***** 2026-02-28 00:36:08.109445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-manager, testbed-node-1, testbed-node-0, testbed-node-3, testbed-node-5, testbed-node-4 2026-02-28 00:36:08.109482 | orchestrator | 2026-02-28 00:36:08.109494 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-28 00:36:08.109505 | orchestrator | Saturday 28 February 2026 00:35:58 +0000 (0:00:04.414) 0:00:27.832 ***** 2026-02-28 00:36:08.109518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109541 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109576 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109607 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 
'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109760 | orchestrator | 2026-02-28 00:36:08.109780 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-28 00:36:08.109799 | orchestrator | Saturday 28 February 2026 00:36:03 +0000 (0:00:05.099) 0:00:32.932 ***** 2026-02-28 00:36:08.109820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109838 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109912 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109963 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-28 00:36:08.109985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.109996 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:08.110099 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:20.020084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-28 00:36:20.020940 | orchestrator | 2026-02-28 00:36:20.020963 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-28 00:36:20.020972 | orchestrator | Saturday 28 February 2026 00:36:08 +0000 (0:00:04.643) 0:00:37.575 ***** 2026-02-28 00:36:20.020981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:36:20.020988 | orchestrator | 2026-02-28 00:36:20.020995 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-28 00:36:20.021001 | orchestrator | Saturday 28 February 2026 00:36:09 +0000 (0:00:01.033) 0:00:38.608 ***** 2026-02-28 00:36:20.021008 | orchestrator | ok: [testbed-manager] 2026-02-28 00:36:20.021016 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:36:20.021022 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:36:20.021028 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:36:20.021034 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:36:20.021039 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:36:20.021044 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:36:20.021049 | orchestrator | 2026-02-28 00:36:20.021054 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2026-02-28 00:36:20.021059 | orchestrator | Saturday 28 February 2026 00:36:10 +0000 (0:00:00.975) 0:00:39.584 ***** 2026-02-28 00:36:20.021065 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:36:20.021071 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:36:20.021076 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:36:20.021081 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:36:20.021087 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:36:20.021093 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:36:20.021098 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:36:20.021103 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:36:20.021108 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:36:20.021114 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:36:20.021119 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:36:20.021124 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:36:20.021129 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:36:20.021134 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:36:20.021163 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:36:20.021180 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 
00:36:20.021186 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:36:20.021191 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:36:20.021211 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:36:20.021216 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:36:20.021221 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:36:20.021227 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:36:20.021232 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:36:20.021237 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:36:20.021242 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:36:20.021248 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:36:20.021253 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:36:20.021258 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:36:20.021263 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:36:20.021268 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:36:20.021273 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-28 00:36:20.021278 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-28 00:36:20.021283 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-28 00:36:20.021288 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-28 00:36:20.021293 | 
orchestrator | skipping: [testbed-node-5] 2026-02-28 00:36:20.021298 | orchestrator | 2026-02-28 00:36:20.021303 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-02-28 00:36:20.021323 | orchestrator | Saturday 28 February 2026 00:36:10 +0000 (0:00:00.774) 0:00:40.359 ***** 2026-02-28 00:36:20.021328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:36:20.021334 | orchestrator | 2026-02-28 00:36:20.021339 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-02-28 00:36:20.021344 | orchestrator | Saturday 28 February 2026 00:36:12 +0000 (0:00:01.091) 0:00:41.451 ***** 2026-02-28 00:36:20.021349 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:36:20.021354 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:36:20.021359 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:36:20.021364 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:36:20.021369 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:36:20.021374 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:36:20.021379 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:36:20.021384 | orchestrator | 2026-02-28 00:36:20.021389 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-02-28 00:36:20.021394 | orchestrator | Saturday 28 February 2026 00:36:12 +0000 (0:00:00.520) 0:00:41.972 ***** 2026-02-28 00:36:20.021399 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:36:20.021404 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:36:20.021409 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:36:20.021414 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:36:20.021419 | 
orchestrator | skipping: [testbed-node-3] 2026-02-28 00:36:20.021424 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:36:20.021429 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:36:20.021434 | orchestrator | 2026-02-28 00:36:20.021439 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-02-28 00:36:20.021444 | orchestrator | Saturday 28 February 2026 00:36:13 +0000 (0:00:00.644) 0:00:42.617 ***** 2026-02-28 00:36:20.021449 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:36:20.021459 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:36:20.021464 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:36:20.021469 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:36:20.021474 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:36:20.021479 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:36:20.021484 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:36:20.021489 | orchestrator | 2026-02-28 00:36:20.021494 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-02-28 00:36:20.021499 | orchestrator | Saturday 28 February 2026 00:36:13 +0000 (0:00:00.518) 0:00:43.136 ***** 2026-02-28 00:36:20.021504 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:36:20.021509 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:36:20.021514 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:36:20.021519 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:36:20.021524 | orchestrator | ok: [testbed-manager] 2026-02-28 00:36:20.021529 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:36:20.021534 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:36:20.021539 | orchestrator | 2026-02-28 00:36:20.021544 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-02-28 00:36:20.021549 | orchestrator | Saturday 28 February 2026 00:36:15 +0000 (0:00:01.683) 0:00:44.820 ***** 
2026-02-28 00:36:20.021554 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:20.021559 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:36:20.021564 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:36:20.021569 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:36:20.021574 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:36:20.021579 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:36:20.021584 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:36:20.021589 | orchestrator |
2026-02-28 00:36:20.021594 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-02-28 00:36:20.021603 | orchestrator | Saturday 28 February 2026 00:36:16 +0000 (0:00:00.939) 0:00:45.760 *****
2026-02-28 00:36:20.021608 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:20.021613 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:36:20.021618 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:36:20.021623 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:36:20.021628 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:36:20.021633 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:36:20.021638 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:36:20.021643 | orchestrator |
2026-02-28 00:36:20.021648 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-28 00:36:20.021653 | orchestrator | Saturday 28 February 2026 00:36:18 +0000 (0:00:02.292) 0:00:48.053 *****
2026-02-28 00:36:20.021658 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:36:20.021663 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:36:20.021668 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:36:20.021673 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:36:20.021678 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:36:20.021683 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:36:20.021688 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:36:20.021693 | orchestrator |
2026-02-28 00:36:20.021698 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-28 00:36:20.021703 | orchestrator | Saturday 28 February 2026 00:36:19 +0000 (0:00:00.799) 0:00:48.852 *****
2026-02-28 00:36:20.021708 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:36:20.021713 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:36:20.021718 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:36:20.021723 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:36:20.021728 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:36:20.021733 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:36:20.021738 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:36:20.021743 | orchestrator |
2026-02-28 00:36:20.021748 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:36:20.021754 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-28 00:36:20.021764 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-28 00:36:20.021772 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-28 00:36:20.343267 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-28 00:36:20.343369 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-28 00:36:20.343385 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-28 00:36:20.343397 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-28 00:36:20.343408 | orchestrator |
2026-02-28 00:36:20.343420 | orchestrator |
2026-02-28 00:36:20.343431 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:36:20.343444 | orchestrator | Saturday 28 February 2026 00:36:20 +0000 (0:00:00.526) 0:00:49.379 *****
2026-02-28 00:36:20.343454 | orchestrator | ===============================================================================
2026-02-28 00:36:20.343465 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.10s
2026-02-28 00:36:20.343476 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.64s
2026-02-28 00:36:20.343486 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.41s
2026-02-28 00:36:20.343497 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.35s
2026-02-28 00:36:20.343508 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.29s
2026-02-28 00:36:20.343518 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.23s
2026-02-28 00:36:20.343529 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.83s
2026-02-28 00:36:20.343539 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.68s
2026-02-28 00:36:20.343550 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s
2026-02-28 00:36:20.343561 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s
2026-02-28 00:36:20.343571 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.52s
2026-02-28 00:36:20.343582 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.38s
2026-02-28 00:36:20.343592 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.23s
2026-02-28 00:36:20.343603 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s
2026-02-28 00:36:20.343613 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s
2026-02-28 00:36:20.343624 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s
2026-02-28 00:36:20.343634 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.09s
2026-02-28 00:36:20.343660 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.03s
2026-02-28 00:36:20.343673 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s
2026-02-28 00:36:20.343683 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 0.94s
2026-02-28 00:36:20.647248 | orchestrator | + osism apply wireguard
2026-02-28 00:36:32.680011 | orchestrator | 2026-02-28 00:36:32 | INFO  | Prepare task for execution of wireguard.
2026-02-28 00:36:32.770969 | orchestrator | 2026-02-28 00:36:32 | INFO  | Task 57e13723-2f21-47df-b8e9-73f6c83e6c0f (wireguard) was prepared for execution.
2026-02-28 00:36:32.771106 | orchestrator | 2026-02-28 00:36:32 | INFO  | It takes a moment until task 57e13723-2f21-47df-b8e9-73f6c83e6c0f (wireguard) has been started and output is visible here.
2026-02-28 00:36:50.771188 | orchestrator |
2026-02-28 00:36:50.771283 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-28 00:36:50.771297 | orchestrator |
2026-02-28 00:36:50.771307 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-28 00:36:50.771316 | orchestrator | Saturday 28 February 2026 00:36:36 +0000 (0:00:00.166) 0:00:00.166 *****
2026-02-28 00:36:50.771324 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:50.771333 | orchestrator |
2026-02-28 00:36:50.771341 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-28 00:36:50.771349 | orchestrator | Saturday 28 February 2026 00:36:38 +0000 (0:00:01.167) 0:00:01.333 *****
2026-02-28 00:36:50.771357 | orchestrator | changed: [testbed-manager]
2026-02-28 00:36:50.771366 | orchestrator |
2026-02-28 00:36:50.771374 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-28 00:36:50.771382 | orchestrator | Saturday 28 February 2026 00:36:43 +0000 (0:00:05.362) 0:00:06.696 *****
2026-02-28 00:36:50.771390 | orchestrator | changed: [testbed-manager]
2026-02-28 00:36:50.771397 | orchestrator |
2026-02-28 00:36:50.771405 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-28 00:36:50.771413 | orchestrator | Saturday 28 February 2026 00:36:43 +0000 (0:00:00.534) 0:00:07.230 *****
2026-02-28 00:36:50.771421 | orchestrator | changed: [testbed-manager]
2026-02-28 00:36:50.771428 | orchestrator |
2026-02-28 00:36:50.771436 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-28 00:36:50.771444 | orchestrator | Saturday 28 February 2026 00:36:44 +0000 (0:00:00.427) 0:00:07.657 *****
2026-02-28 00:36:50.771452 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:50.771460 | orchestrator |
2026-02-28 00:36:50.771467 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-28 00:36:50.771475 | orchestrator | Saturday 28 February 2026 00:36:45 +0000 (0:00:00.658) 0:00:08.316 *****
2026-02-28 00:36:50.771483 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:50.771491 | orchestrator |
2026-02-28 00:36:50.771499 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-28 00:36:50.771506 | orchestrator | Saturday 28 February 2026 00:36:45 +0000 (0:00:00.397) 0:00:08.713 *****
2026-02-28 00:36:50.771514 | orchestrator | ok: [testbed-manager]
2026-02-28 00:36:50.771522 | orchestrator |
2026-02-28 00:36:50.771530 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-28 00:36:50.771538 | orchestrator | Saturday 28 February 2026 00:36:45 +0000 (0:00:00.408) 0:00:09.122 *****
2026-02-28 00:36:50.771545 | orchestrator | changed: [testbed-manager]
2026-02-28 00:36:50.771554 | orchestrator |
2026-02-28 00:36:50.771562 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-28 00:36:50.771570 | orchestrator | Saturday 28 February 2026 00:36:46 +0000 (0:00:01.135) 0:00:10.257 *****
2026-02-28 00:36:50.771578 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-28 00:36:50.771586 | orchestrator | changed: [testbed-manager]
2026-02-28 00:36:50.771594 | orchestrator |
2026-02-28 00:36:50.771601 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-28 00:36:50.771609 | orchestrator | Saturday 28 February 2026 00:36:47 +0000 (0:00:00.919) 0:00:11.177 *****
2026-02-28 00:36:50.771617 | orchestrator | changed: [testbed-manager]
2026-02-28 00:36:50.771625 | orchestrator |
2026-02-28 00:36:50.771633 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-28 00:36:50.771641 | orchestrator | Saturday 28 February 2026 00:36:49 +0000 (0:00:01.623) 0:00:12.800 *****
2026-02-28 00:36:50.771648 | orchestrator | changed: [testbed-manager]
2026-02-28 00:36:50.771656 | orchestrator |
2026-02-28 00:36:50.771664 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:36:50.771709 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:36:50.771721 | orchestrator |
2026-02-28 00:36:50.771730 | orchestrator |
2026-02-28 00:36:50.771739 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:36:50.771748 | orchestrator | Saturday 28 February 2026 00:36:50 +0000 (0:00:00.891) 0:00:13.692 *****
2026-02-28 00:36:50.771757 | orchestrator | ===============================================================================
2026-02-28 00:36:50.771766 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.36s
2026-02-28 00:36:50.771775 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.62s
2026-02-28 00:36:50.771783 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.17s
2026-02-28 00:36:50.771792 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s
2026-02-28 00:36:50.771801 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s
2026-02-28 00:36:50.771810 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.89s
2026-02-28 00:36:50.771819 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.66s
2026-02-28 00:36:50.771828 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s
2026-02-28 00:36:50.771837 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2026-02-28 00:36:50.771850 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2026-02-28 00:36:50.771860 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s
2026-02-28 00:36:51.064591 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-28 00:36:51.099756 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-28 00:36:51.099836 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-28 00:36:51.172748 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 208 0 --:--:-- --:--:-- --:--:-- 211
2026-02-28 00:36:51.186443 | orchestrator | + osism apply --environment custom workarounds
2026-02-28 00:36:53.151052 | orchestrator | 2026-02-28 00:36:53 | INFO  | Trying to run play workarounds in environment custom
2026-02-28 00:37:03.162672 | orchestrator | 2026-02-28 00:37:03 | INFO  | Prepare task for execution of workarounds.
2026-02-28 00:37:03.247098 | orchestrator | 2026-02-28 00:37:03 | INFO  | Task f1510748-5191-4ec1-8b66-d3a1c4a43222 (workarounds) was prepared for execution.
2026-02-28 00:37:03.247255 | orchestrator | 2026-02-28 00:37:03 | INFO  | It takes a moment until task f1510748-5191-4ec1-8b66-d3a1c4a43222 (workarounds) has been started and output is visible here.
2026-02-28 00:37:28.254235 | orchestrator |
2026-02-28 00:37:28.254387 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:37:28.254415 | orchestrator |
2026-02-28 00:37:28.254435 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-28 00:37:28.254455 | orchestrator | Saturday 28 February 2026 00:37:07 +0000 (0:00:00.128) 0:00:00.128 *****
2026-02-28 00:37:28.254474 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-28 00:37:28.254494 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-28 00:37:28.254512 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-28 00:37:28.254530 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-28 00:37:28.254549 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-28 00:37:28.254567 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-28 00:37:28.254586 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-28 00:37:28.254640 | orchestrator |
2026-02-28 00:37:28.254661 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-28 00:37:28.254680 | orchestrator |
2026-02-28 00:37:28.254699 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-28 00:37:28.254718 | orchestrator | Saturday 28 February 2026 00:37:08 +0000 (0:00:00.689) 0:00:00.817 *****
2026-02-28 00:37:28.254736 | orchestrator | ok: [testbed-manager]
2026-02-28 00:37:28.254757 | orchestrator |
2026-02-28 00:37:28.254776 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-28 00:37:28.254794 | orchestrator |
2026-02-28 00:37:28.254813 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-28 00:37:28.254832 | orchestrator | Saturday 28 February 2026 00:37:10 +0000 (0:00:02.236) 0:00:03.054 *****
2026-02-28 00:37:28.254849 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:37:28.254868 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:37:28.254886 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:37:28.254904 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:37:28.254917 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:37:28.254929 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:37:28.254941 | orchestrator |
2026-02-28 00:37:28.254953 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-28 00:37:28.254965 | orchestrator |
2026-02-28 00:37:28.254977 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-28 00:37:28.254993 | orchestrator | Saturday 28 February 2026 00:37:12 +0000 (0:00:01.842) 0:00:04.897 *****
2026-02-28 00:37:28.255012 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:28.255031 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:28.255050 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:28.255068 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:28.255085 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:28.255103 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-28 00:37:28.255122 | orchestrator |
2026-02-28 00:37:28.255140 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-28 00:37:28.255330 | orchestrator | Saturday 28 February 2026 00:37:13 +0000 (0:00:01.548) 0:00:06.445 *****
2026-02-28 00:37:28.255355 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:37:28.255366 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:37:28.255388 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:37:28.255398 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:37:28.255407 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:37:28.255417 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:37:28.255426 | orchestrator |
2026-02-28 00:37:28.255436 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-28 00:37:28.255446 | orchestrator | Saturday 28 February 2026 00:37:17 +0000 (0:00:03.739) 0:00:10.185 *****
2026-02-28 00:37:28.255456 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:37:28.255482 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:37:28.255492 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:37:28.255501 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:37:28.255511 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:37:28.255520 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:37:28.255530 | orchestrator |
2026-02-28 00:37:28.255539 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-28 00:37:28.255549 | orchestrator |
2026-02-28 00:37:28.255558 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-28 00:37:28.255568 | orchestrator | Saturday 28 February 2026 00:37:18 +0000 (0:00:00.742) 0:00:10.928 *****
2026-02-28 00:37:28.255591 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:37:28.255601 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:37:28.255615 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:37:28.255631 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:37:28.255647 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:37:28.255662 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:28.255678 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:37:28.255693 | orchestrator |
2026-02-28 00:37:28.255710 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-28 00:37:28.255720 | orchestrator | Saturday 28 February 2026 00:37:19 +0000 (0:00:01.667) 0:00:12.596 *****
2026-02-28 00:37:28.255729 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:37:28.255739 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:37:28.255748 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:37:28.255758 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:37:28.255767 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:37:28.255777 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:37:28.255809 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:28.255820 | orchestrator |
2026-02-28 00:37:28.255829 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-28 00:37:28.255839 | orchestrator | Saturday 28 February 2026 00:37:21 +0000 (0:00:01.524) 0:00:14.332 *****
2026-02-28 00:37:28.255849 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:37:28.255858 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:37:28.255868 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:37:28.255877 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:37:28.255887 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:37:28.255896 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:37:28.255906 | orchestrator | ok: [testbed-manager]
2026-02-28 00:37:28.255915 | orchestrator |
2026-02-28 00:37:28.255925 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-28 00:37:28.255935 | orchestrator | Saturday 28 February 2026 00:37:23 +0000 (0:00:01.524) 0:00:15.856 *****
2026-02-28 00:37:28.255944 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:37:28.255954 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:37:28.255964 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:37:28.255973 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:37:28.255983 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:37:28.255992 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:37:28.256002 | orchestrator | changed: [testbed-manager]
2026-02-28 00:37:28.256011 | orchestrator |
2026-02-28 00:37:28.256021 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-28 00:37:28.256031 | orchestrator | Saturday 28 February 2026 00:37:24 +0000 (0:00:01.799) 0:00:17.656 *****
2026-02-28 00:37:28.256040 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:37:28.256050 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:37:28.256059 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:37:28.256080 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:37:28.256089 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:37:28.256099 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:37:28.256108 | orchestrator | skipping: [testbed-manager]
2026-02-28 00:37:28.256118 | orchestrator |
2026-02-28 00:37:28.256128 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-28 00:37:28.256137 | orchestrator |
2026-02-28 00:37:28.256147 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-28 00:37:28.256198 | orchestrator | Saturday 28 February 2026 00:37:25 +0000 (0:00:00.597) 0:00:18.253 *****
2026-02-28 00:37:28.256209 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:37:28.256219 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:37:28.256228 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:37:28.256238 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:37:28.256247 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:37:28.256257 | orchestrator | ok: [testbed-manager]
2026-02-28 00:37:28.256275 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:37:28.256284 | orchestrator |
2026-02-28 00:37:28.256294 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:37:28.256305 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-28 00:37:28.256320 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:28.256336 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:28.256349 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:28.256358 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:28.256368 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:28.256377 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 00:37:28.256387 | orchestrator |
2026-02-28 00:37:28.256396 | orchestrator |
2026-02-28 00:37:28.256413 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:37:28.256423 | orchestrator | Saturday 28 February 2026 00:37:28 +0000 (0:00:02.782) 0:00:21.036 *****
2026-02-28 00:37:28.256433 | orchestrator | ===============================================================================
2026-02-28 00:37:28.256442 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.74s
2026-02-28 00:37:28.256452 | orchestrator | Install python3-docker -------------------------------------------------- 2.78s
2026-02-28 00:37:28.256461 | orchestrator | Apply netplan configuration --------------------------------------------- 2.24s
2026-02-28 00:37:28.256471 | orchestrator | Apply netplan configuration --------------------------------------------- 1.84s
2026-02-28 00:37:28.256480 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.80s
2026-02-28 00:37:28.256490 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.74s
2026-02-28 00:37:28.256499 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.67s
2026-02-28 00:37:28.256508 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s
2026-02-28 00:37:28.256518 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s
2026-02-28 00:37:28.256527 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.74s
2026-02-28 00:37:28.256537 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.69s
2026-02-28 00:37:28.256553 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s
2026-02-28 00:37:28.822796 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-28 00:37:40.824206 | orchestrator | 2026-02-28 00:37:40 | INFO  | Prepare task for execution of reboot.
2026-02-28 00:37:40.892662 | orchestrator | 2026-02-28 00:37:40 | INFO  | Task 5af2f31b-dd87-4d9c-955c-2fdaf0d41e0b (reboot) was prepared for execution.
2026-02-28 00:37:40.892758 | orchestrator | 2026-02-28 00:37:40 | INFO  | It takes a moment until task 5af2f31b-dd87-4d9c-955c-2fdaf0d41e0b (reboot) has been started and output is visible here.
2026-02-28 00:37:50.479208 | orchestrator | 2026-02-28 00:37:50.479323 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:37:50.479341 | orchestrator | 2026-02-28 00:37:50.479353 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:37:50.479390 | orchestrator | Saturday 28 February 2026 00:37:45 +0000 (0:00:00.169) 0:00:00.169 ***** 2026-02-28 00:37:50.479401 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:37:50.479413 | orchestrator | 2026-02-28 00:37:50.479424 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:37:50.479435 | orchestrator | Saturday 28 February 2026 00:37:45 +0000 (0:00:00.106) 0:00:00.275 ***** 2026-02-28 00:37:50.479446 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:37:50.479457 | orchestrator | 2026-02-28 00:37:50.479467 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:37:50.479478 | orchestrator | Saturday 28 February 2026 00:37:46 +0000 (0:00:00.829) 0:00:01.105 ***** 2026-02-28 00:37:50.479489 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:37:50.479499 | orchestrator | 2026-02-28 00:37:50.479510 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:37:50.479521 | orchestrator | 2026-02-28 00:37:50.479532 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:37:50.479542 | orchestrator | Saturday 28 February 2026 00:37:46 +0000 (0:00:00.094) 0:00:01.199 ***** 2026-02-28 00:37:50.479553 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:37:50.479564 | orchestrator | 2026-02-28 00:37:50.479574 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:37:50.479585 | orchestrator | Saturday 28 February 
2026 00:37:46 +0000 (0:00:00.084) 0:00:01.284 ***** 2026-02-28 00:37:50.479595 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:37:50.479606 | orchestrator | 2026-02-28 00:37:50.479617 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:37:50.479627 | orchestrator | Saturday 28 February 2026 00:37:46 +0000 (0:00:00.575) 0:00:01.859 ***** 2026-02-28 00:37:50.479638 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:37:50.479649 | orchestrator | 2026-02-28 00:37:50.479660 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:37:50.479671 | orchestrator | 2026-02-28 00:37:50.479681 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:37:50.479692 | orchestrator | Saturday 28 February 2026 00:37:46 +0000 (0:00:00.094) 0:00:01.954 ***** 2026-02-28 00:37:50.479703 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:37:50.479713 | orchestrator | 2026-02-28 00:37:50.479726 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:37:50.479738 | orchestrator | Saturday 28 February 2026 00:37:47 +0000 (0:00:00.164) 0:00:02.118 ***** 2026-02-28 00:37:50.479751 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:37:50.479763 | orchestrator | 2026-02-28 00:37:50.479775 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:37:50.479788 | orchestrator | Saturday 28 February 2026 00:37:47 +0000 (0:00:00.616) 0:00:02.735 ***** 2026-02-28 00:37:50.479800 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:37:50.479812 | orchestrator | 2026-02-28 00:37:50.479825 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:37:50.479837 | orchestrator | 2026-02-28 00:37:50.479850 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-02-28 00:37:50.479862 | orchestrator | Saturday 28 February 2026 00:37:47 +0000 (0:00:00.120) 0:00:02.856 ***** 2026-02-28 00:37:50.479874 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:37:50.479887 | orchestrator | 2026-02-28 00:37:50.479899 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:37:50.479926 | orchestrator | Saturday 28 February 2026 00:37:47 +0000 (0:00:00.096) 0:00:02.952 ***** 2026-02-28 00:37:50.479939 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:37:50.479951 | orchestrator | 2026-02-28 00:37:50.479964 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:37:50.479976 | orchestrator | Saturday 28 February 2026 00:37:48 +0000 (0:00:00.616) 0:00:03.569 ***** 2026-02-28 00:37:50.479988 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:37:50.480035 | orchestrator | 2026-02-28 00:37:50.480047 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:37:50.480060 | orchestrator | 2026-02-28 00:37:50.480072 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:37:50.480085 | orchestrator | Saturday 28 February 2026 00:37:48 +0000 (0:00:00.100) 0:00:03.669 ***** 2026-02-28 00:37:50.480098 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:37:50.480109 | orchestrator | 2026-02-28 00:37:50.480120 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:37:50.480130 | orchestrator | Saturday 28 February 2026 00:37:48 +0000 (0:00:00.080) 0:00:03.750 ***** 2026-02-28 00:37:50.480141 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:37:50.480186 | orchestrator | 2026-02-28 00:37:50.480199 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-02-28 00:37:50.480210 | orchestrator | Saturday 28 February 2026 00:37:49 +0000 (0:00:00.610) 0:00:04.361 ***** 2026-02-28 00:37:50.480221 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:37:50.480232 | orchestrator | 2026-02-28 00:37:50.480242 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-28 00:37:50.480253 | orchestrator | 2026-02-28 00:37:50.480264 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-28 00:37:50.480275 | orchestrator | Saturday 28 February 2026 00:37:49 +0000 (0:00:00.120) 0:00:04.482 ***** 2026-02-28 00:37:50.480285 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:37:50.480296 | orchestrator | 2026-02-28 00:37:50.480306 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-28 00:37:50.480317 | orchestrator | Saturday 28 February 2026 00:37:49 +0000 (0:00:00.097) 0:00:04.580 ***** 2026-02-28 00:37:50.480328 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:37:50.480338 | orchestrator | 2026-02-28 00:37:50.480349 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-28 00:37:50.480360 | orchestrator | Saturday 28 February 2026 00:37:50 +0000 (0:00:00.659) 0:00:05.240 ***** 2026-02-28 00:37:50.480390 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:37:50.480402 | orchestrator | 2026-02-28 00:37:50.480412 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:37:50.480424 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:37:50.480437 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:37:50.480448 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-02-28 00:37:50.480458 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:37:50.480469 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:37:50.480480 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:37:50.480491 | orchestrator | 2026-02-28 00:37:50.480501 | orchestrator | 2026-02-28 00:37:50.480512 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:37:50.480523 | orchestrator | Saturday 28 February 2026 00:37:50 +0000 (0:00:00.038) 0:00:05.279 ***** 2026-02-28 00:37:50.480534 | orchestrator | =============================================================================== 2026-02-28 00:37:50.480544 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 3.91s 2026-02-28 00:37:50.480555 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2026-02-28 00:37:50.480566 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.57s 2026-02-28 00:37:50.771141 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-28 00:38:02.862908 | orchestrator | 2026-02-28 00:38:02 | INFO  | Prepare task for execution of wait-for-connection. 2026-02-28 00:38:02.946752 | orchestrator | 2026-02-28 00:38:02 | INFO  | Task a3972351-ca41-4f35-aa6a-7ea78fae7d2b (wait-for-connection) was prepared for execution. 2026-02-28 00:38:02.946839 | orchestrator | 2026-02-28 00:38:02 | INFO  | It takes a moment until task a3972351-ca41-4f35-aa6a-7ea78fae7d2b (wait-for-connection) has been started and output is visible here. 
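The `osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes` call above passes the same `ireallymeanit=yes` guard that the reboot plays check in their "Exit playbook, if user did not mean to reboot systems" task. A minimal shell sketch of that confirmation gate (an illustration of the pattern, not the playbook's actual implementation; `confirm_action` is a hypothetical helper):

```shell
#!/usr/bin/env bash
# Sketch of the ireallymeanit=yes confirmation gate seen in the reboot plays.
# confirm_action is a hypothetical helper, not part of osism.
confirm_action() {
  local ireallymeanit="${1:-no}"
  if [ "$ireallymeanit" != "yes" ]; then
    # Mirrors "Exit playbook, if user did not mean to reboot systems":
    # without explicit confirmation, nothing destructive runs.
    echo "aborting: pass -e ireallymeanit=yes to confirm" >&2
    return 1
  fi
  return 0
}
```

The gate makes an accidental `osism apply reboot` a no-op unless the operator explicitly opts in on the command line.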
2026-02-28 00:38:19.730346 | orchestrator | 2026-02-28 00:38:19.730500 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-28 00:38:19.730544 | orchestrator | 2026-02-28 00:38:19.730565 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-28 00:38:19.730584 | orchestrator | Saturday 28 February 2026 00:38:07 +0000 (0:00:00.231) 0:00:00.231 ***** 2026-02-28 00:38:19.730602 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:38:19.730635 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:38:19.730652 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:38:19.730670 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:38:19.730688 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:38:19.730727 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:38:19.730747 | orchestrator | 2026-02-28 00:38:19.730764 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:38:19.730783 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:19.730801 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:19.730819 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:19.730836 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:19.730855 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:19.730874 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:19.730893 | orchestrator | 2026-02-28 00:38:19.730913 | orchestrator | 2026-02-28 00:38:19.730931 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-28 00:38:19.730950 | orchestrator | Saturday 28 February 2026 00:38:19 +0000 (0:00:11.614) 0:00:11.846 ***** 2026-02-28 00:38:19.730969 | orchestrator | =============================================================================== 2026-02-28 00:38:19.730988 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.62s 2026-02-28 00:38:20.060637 | orchestrator | + osism apply hddtemp 2026-02-28 00:38:32.240310 | orchestrator | 2026-02-28 00:38:32 | INFO  | Prepare task for execution of hddtemp. 2026-02-28 00:38:32.310526 | orchestrator | 2026-02-28 00:38:32 | INFO  | Task c5177648-ca43-40fc-8107-cc1107b607a9 (hddtemp) was prepared for execution. 2026-02-28 00:38:32.310629 | orchestrator | 2026-02-28 00:38:32 | INFO  | It takes a moment until task c5177648-ca43-40fc-8107-cc1107b607a9 (hddtemp) has been started and output is visible here. 2026-02-28 00:38:59.035581 | orchestrator | 2026-02-28 00:38:59.035691 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-28 00:38:59.035709 | orchestrator | 2026-02-28 00:38:59.035722 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-28 00:38:59.035733 | orchestrator | Saturday 28 February 2026 00:38:36 +0000 (0:00:00.255) 0:00:00.255 ***** 2026-02-28 00:38:59.035744 | orchestrator | ok: [testbed-manager] 2026-02-28 00:38:59.035778 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:38:59.035790 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:38:59.035800 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:38:59.035811 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:38:59.035822 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:38:59.035833 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:38:59.035844 | orchestrator | 2026-02-28 00:38:59.035855 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-02-28 00:38:59.035866 | orchestrator | Saturday 28 February 2026 00:38:37 +0000 (0:00:00.772) 0:00:01.027 ***** 2026-02-28 00:38:59.035878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:38:59.035892 | orchestrator | 2026-02-28 00:38:59.035903 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-28 00:38:59.035913 | orchestrator | Saturday 28 February 2026 00:38:38 +0000 (0:00:01.169) 0:00:02.196 ***** 2026-02-28 00:38:59.035924 | orchestrator | ok: [testbed-manager] 2026-02-28 00:38:59.035934 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:38:59.035945 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:38:59.035955 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:38:59.035966 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:38:59.035976 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:38:59.035986 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:38:59.035997 | orchestrator | 2026-02-28 00:38:59.036007 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-28 00:38:59.036018 | orchestrator | Saturday 28 February 2026 00:38:40 +0000 (0:00:01.906) 0:00:04.103 ***** 2026-02-28 00:38:59.036029 | orchestrator | changed: [testbed-manager] 2026-02-28 00:38:59.036040 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:38:59.036051 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:38:59.036061 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:38:59.036072 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:38:59.036082 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:38:59.036093 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:38:59.036103 | 
orchestrator | 2026-02-28 00:38:59.036114 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-02-28 00:38:59.036171 | orchestrator | Saturday 28 February 2026 00:38:41 +0000 (0:00:01.202) 0:00:05.305 ***** 2026-02-28 00:38:59.036185 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:38:59.036197 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:38:59.036209 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:38:59.036222 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:38:59.036235 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:38:59.036247 | orchestrator | ok: [testbed-manager] 2026-02-28 00:38:59.036259 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:38:59.036272 | orchestrator | 2026-02-28 00:38:59.036284 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-28 00:38:59.036297 | orchestrator | Saturday 28 February 2026 00:38:42 +0000 (0:00:01.147) 0:00:06.452 ***** 2026-02-28 00:38:59.036309 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:38:59.036321 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:38:59.036333 | orchestrator | changed: [testbed-manager] 2026-02-28 00:38:59.036346 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:38:59.036374 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:38:59.036386 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:38:59.036399 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:38:59.036411 | orchestrator | 2026-02-28 00:38:59.036423 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-28 00:38:59.036435 | orchestrator | Saturday 28 February 2026 00:38:43 +0000 (0:00:00.795) 0:00:07.248 ***** 2026-02-28 00:38:59.036447 | orchestrator | changed: [testbed-manager] 2026-02-28 00:38:59.036460 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:38:59.036481 | orchestrator | changed: [testbed-node-3] 
2026-02-28 00:38:59.036492 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:38:59.036503 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:38:59.036513 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:38:59.036524 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:38:59.036534 | orchestrator | 2026-02-28 00:38:59.036545 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-28 00:38:59.036556 | orchestrator | Saturday 28 February 2026 00:38:55 +0000 (0:00:12.012) 0:00:19.261 ***** 2026-02-28 00:38:59.036567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:38:59.036578 | orchestrator | 2026-02-28 00:38:59.036589 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-28 00:38:59.036600 | orchestrator | Saturday 28 February 2026 00:38:56 +0000 (0:00:01.315) 0:00:20.577 ***** 2026-02-28 00:38:59.036611 | orchestrator | changed: [testbed-manager] 2026-02-28 00:38:59.036621 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:38:59.036632 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:38:59.036643 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:38:59.036653 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:38:59.036664 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:38:59.036675 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:38:59.036685 | orchestrator | 2026-02-28 00:38:59.036696 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:38:59.036707 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:38:59.036737 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:38:59.036749 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:38:59.036761 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:38:59.036771 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:38:59.036782 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:38:59.036793 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:38:59.036803 | orchestrator | 2026-02-28 00:38:59.036814 | orchestrator | 2026-02-28 00:38:59.036825 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:38:59.036836 | orchestrator | Saturday 28 February 2026 00:38:58 +0000 (0:00:01.863) 0:00:22.440 ***** 2026-02-28 00:38:59.036846 | orchestrator | =============================================================================== 2026-02-28 00:38:59.036857 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.01s 2026-02-28 00:38:59.036868 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.91s 2026-02-28 00:38:59.036878 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.86s 2026-02-28 00:38:59.036889 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.32s 2026-02-28 00:38:59.036899 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.20s 2026-02-28 00:38:59.036910 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.17s 2026-02-28 00:38:59.036927 | orchestrator | osism.services.hddtemp : Check 
if drivetemp module is available --------- 1.15s 2026-02-28 00:38:59.036938 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.80s 2026-02-28 00:38:59.036948 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.77s 2026-02-28 00:38:59.380982 | orchestrator | ++ semver latest 7.1.1 2026-02-28 00:38:59.441299 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:38:59.441381 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-28 00:38:59.441389 | orchestrator | + sudo systemctl restart manager.service 2026-02-28 00:39:12.749804 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-28 00:39:12.749913 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-28 00:39:12.749927 | orchestrator | + local max_attempts=60 2026-02-28 00:39:12.749938 | orchestrator | + local name=ceph-ansible 2026-02-28 00:39:12.749947 | orchestrator | + local attempt_num=1 2026-02-28 00:39:12.749957 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:12.789873 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:12.789969 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:12.789985 | orchestrator | + sleep 5 2026-02-28 00:39:17.794402 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:17.811691 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:17.811790 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:17.811804 | orchestrator | + sleep 5 2026-02-28 00:39:22.815384 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:22.852965 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:22.853049 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:22.853063 | orchestrator | + sleep 5 2026-02-28 00:39:27.857263 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:27.889993 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:27.890258 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:27.890279 | orchestrator | + sleep 5 2026-02-28 00:39:32.895227 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:32.933693 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:32.933771 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:32.933783 | orchestrator | + sleep 5 2026-02-28 00:39:37.938701 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:37.979770 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:37.979876 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:37.979891 | orchestrator | + sleep 5 2026-02-28 00:39:42.985009 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:43.023170 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:43.023228 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:43.023236 | orchestrator | + sleep 5 2026-02-28 00:39:48.027591 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:48.059563 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:48.059643 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:48.059658 | orchestrator | + sleep 5 2026-02-28 00:39:53.064652 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:53.083995 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:53.084067 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:53.084080 | orchestrator | + sleep 5 2026-02-28 00:39:58.087277 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:39:58.125817 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:39:58.125898 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:39:58.125915 | orchestrator | + sleep 5 2026-02-28 00:40:03.130953 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:03.166597 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:03.166698 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:03.166714 | orchestrator | + sleep 5 2026-02-28 00:40:08.172764 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:08.211073 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:08.211219 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:08.211266 | orchestrator | + sleep 5 2026-02-28 00:40:13.215281 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:13.255369 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:13.255436 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-28 00:40:13.255449 | orchestrator | + sleep 5 2026-02-28 00:40:18.261234 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-28 00:40:18.301408 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:18.301506 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-28 00:40:18.301527 | orchestrator | + local max_attempts=60 2026-02-28 00:40:18.301545 | orchestrator | + local name=kolla-ansible 2026-02-28 00:40:18.301561 | orchestrator | + local attempt_num=1 2026-02-28 00:40:18.301579 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-28 00:40:18.341462 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:18.341558 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-02-28 00:40:18.341572 | orchestrator | + local max_attempts=60 2026-02-28 00:40:18.341583 | orchestrator | + local name=osism-ansible 2026-02-28 00:40:18.341593 | orchestrator | + local attempt_num=1 2026-02-28 00:40:18.342622 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-28 00:40:18.380274 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-28 00:40:18.380375 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-28 00:40:18.380390 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-28 00:40:18.561114 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-28 00:40:18.715499 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-28 00:40:18.878484 | orchestrator | ARA in osism-ansible already disabled. 2026-02-28 00:40:19.045070 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-28 00:40:19.045503 | orchestrator | + osism apply gather-facts 2026-02-28 00:40:31.122169 | orchestrator | 2026-02-28 00:40:31 | INFO  | Prepare task for execution of gather-facts. 2026-02-28 00:40:31.191668 | orchestrator | 2026-02-28 00:40:31 | INFO  | Task 6b8c5b5d-9d20-47ea-af3f-07ab8672bc5e (gather-facts) was prepared for execution. 2026-02-28 00:40:31.191792 | orchestrator | 2026-02-28 00:40:31 | INFO  | It takes a moment until task 6b8c5b5d-9d20-47ea-af3f-07ab8672bc5e (gather-facts) has been started and output is visible here. 
2026-02-28 00:40:44.801020 | orchestrator | 2026-02-28 00:40:44.801187 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-28 00:40:44.801207 | orchestrator | 2026-02-28 00:40:44.801220 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-28 00:40:44.801232 | orchestrator | Saturday 28 February 2026 00:40:35 +0000 (0:00:00.163) 0:00:00.163 ***** 2026-02-28 00:40:44.801243 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:40:44.801256 | orchestrator | ok: [testbed-manager] 2026-02-28 00:40:44.801267 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:40:44.801278 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:40:44.801289 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:40:44.801300 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:40:44.801310 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:40:44.801321 | orchestrator | 2026-02-28 00:40:44.801333 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-28 00:40:44.801344 | orchestrator | 2026-02-28 00:40:44.801355 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-28 00:40:44.801366 | orchestrator | Saturday 28 February 2026 00:40:43 +0000 (0:00:08.472) 0:00:08.635 ***** 2026-02-28 00:40:44.801377 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:40:44.801389 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:40:44.801400 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:40:44.801411 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:40:44.801422 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:40:44.801433 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:40:44.801444 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:40:44.801455 | orchestrator | 2026-02-28 00:40:44.801466 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-28 00:40:44.801477 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:40:44.801514 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:40:44.801527 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:40:44.801558 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:40:44.801571 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:40:44.801584 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:40:44.801597 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-28 00:40:44.801609 | orchestrator | 2026-02-28 00:40:44.801622 | orchestrator | 2026-02-28 00:40:44.801635 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:40:44.801648 | orchestrator | Saturday 28 February 2026 00:40:44 +0000 (0:00:00.627) 0:00:09.263 ***** 2026-02-28 00:40:44.801660 | orchestrator | =============================================================================== 2026-02-28 00:40:44.801673 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.47s 2026-02-28 00:40:44.801685 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-02-28 00:40:45.136526 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-28 00:40:45.154468 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-28 
00:40:45.167751 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-28 00:40:45.180491 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-28 00:40:45.197940 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-28 00:40:45.211406 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-28 00:40:45.223639 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-28 00:40:45.241328 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-28 00:40:45.253017 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-28 00:40:45.268321 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-28 00:40:45.281672 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-28 00:40:45.299138 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-28 00:40:45.315868 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-28 00:40:45.335813 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-28 00:40:45.353928 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-28 00:40:45.369999 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-28 00:40:45.387575 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-28 00:40:45.402010 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-28 00:40:45.415128 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-28 00:40:45.434550 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-28 00:40:45.450303 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-28 00:40:45.475617 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-28 00:40:45.496233 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-28 00:40:45.513521 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-28 00:40:45.777451 | orchestrator | ok: Runtime: 0:23:44.644228 2026-02-28 00:40:45.889568 | 2026-02-28 00:40:45.889711 | TASK [Deploy services] 2026-02-28 00:40:46.422376 | orchestrator | skipping: Conditional result was False 2026-02-28 00:40:46.440146 | 2026-02-28 00:40:46.440334 | TASK [Deploy in a nutshell] 2026-02-28 00:40:47.224126 | orchestrator | + set -e 2026-02-28 00:40:47.224232 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-28 00:40:47.224245 | orchestrator | ++ export INTERACTIVE=false 2026-02-28 00:40:47.224257 | orchestrator | ++ INTERACTIVE=false 2026-02-28 00:40:47.224265 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-28 00:40:47.224273 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-28 00:40:47.224281 | 
orchestrator | + source /opt/manager-vars.sh 2026-02-28 00:40:47.224306 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-28 00:40:47.224320 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-28 00:40:47.224329 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-28 00:40:47.224337 | orchestrator | ++ CEPH_VERSION=reef 2026-02-28 00:40:47.224345 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-28 00:40:47.224355 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-28 00:40:47.224361 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-28 00:40:47.224372 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-28 00:40:47.224379 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-28 00:40:47.224387 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-28 00:40:47.224393 | orchestrator | ++ export ARA=false 2026-02-28 00:40:47.224400 | orchestrator | ++ ARA=false 2026-02-28 00:40:47.224407 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-28 00:40:47.224421 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-28 00:40:47.224427 | orchestrator | ++ export TEMPEST=true 2026-02-28 00:40:47.224433 | orchestrator | ++ TEMPEST=true 2026-02-28 00:40:47.224440 | orchestrator | ++ export IS_ZUUL=true 2026-02-28 00:40:47.224446 | orchestrator | ++ IS_ZUUL=true 2026-02-28 00:40:47.224790 | orchestrator | 2026-02-28 00:40:47.224800 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.253 2026-02-28 00:40:47.224807 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.253 2026-02-28 00:40:47.224814 | orchestrator | ++ export EXTERNAL_API=false 2026-02-28 00:40:47.224820 | orchestrator | ++ EXTERNAL_API=false 2026-02-28 00:40:47.224826 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-28 00:40:47.224833 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-28 00:40:47.224839 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-28 00:40:47.224846 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-28 00:40:47.224852 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2026-02-28 00:40:47.224862 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-28 00:40:47.224869 | orchestrator | + echo 2026-02-28 00:40:47.224877 | orchestrator | + echo '# PULL IMAGES' 2026-02-28 00:40:47.225599 | orchestrator | # PULL IMAGES 2026-02-28 00:40:47.225607 | orchestrator | 2026-02-28 00:40:47.225612 | orchestrator | + echo 2026-02-28 00:40:47.226436 | orchestrator | ++ semver latest 7.0.0 2026-02-28 00:40:47.292271 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-28 00:40:47.292317 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-28 00:40:47.292514 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-28 00:40:49.359933 | orchestrator | 2026-02-28 00:40:49 | INFO  | Trying to run play pull-images in environment custom 2026-02-28 00:40:59.447999 | orchestrator | 2026-02-28 00:40:59 | INFO  | Prepare task for execution of pull-images. 2026-02-28 00:40:59.509352 | orchestrator | 2026-02-28 00:40:59 | INFO  | Task 6d53a4f1-52a2-4178-867f-f3d5d217157e (pull-images) was prepared for execution. 2026-02-28 00:40:59.509450 | orchestrator | 2026-02-28 00:40:59 | INFO  | Task 6d53a4f1-52a2-4178-867f-f3d5d217157e is running in background. No more output. Check ARA for logs. 2026-02-28 00:41:01.767437 | orchestrator | 2026-02-28 00:41:01 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-28 00:41:11.819901 | orchestrator | 2026-02-28 00:41:11 | INFO  | Prepare task for execution of wipe-partitions. 2026-02-28 00:41:11.889802 | orchestrator | 2026-02-28 00:41:11 | INFO  | Task ebe85d5e-916a-4a75-aa12-86a438cc5850 (wipe-partitions) was prepared for execution. 2026-02-28 00:41:11.889874 | orchestrator | 2026-02-28 00:41:11 | INFO  | It takes a moment until task ebe85d5e-916a-4a75-aa12-86a438cc5850 (wipe-partitions) has been started and output is visible here. 
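The trace above gates `pull-images` on a version check: `semver latest 7.0.0` returns `-1`, the `-ge 0` test fails, and the explicit `latest` comparison acts as the fallback. A minimal sketch of that gate, assuming a hypothetical comparator (`semver` here is a stand-in, not the real helper from `/opt/configuration`; it treats any non-`X.Y.Z` string such as `latest` as `-1`, matching the log):

```shell
# Hypothetical stand-in for the semver helper seen in the trace:
# prints -1/0/1, and any non-semver string (e.g. "latest") yields -1.
semver() {
    if ! printf '%s' "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'; then
        echo "-1"; return
    fi
    if [ "$1" = "$2" ]; then
        echo "0"
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo "-1"
    else
        echo "1"
    fi
}

MANAGER_VERSION=latest
result=$(semver "$MANAGER_VERSION" 7.0.0)
# New enough OR explicitly "latest": run the pull-images play.
if [ "$result" -ge 0 ] || [ "$MANAGER_VERSION" = "latest" ]; then
    echo "pull-images supported"
fi
```

Because `latest` never sorts as a valid semver, the second branch of the `if` is what actually lets this periodic job through.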
2026-02-28 00:41:25.821617 | orchestrator | 2026-02-28 00:41:25.821741 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-28 00:41:25.821770 | orchestrator | 2026-02-28 00:41:25.821783 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-28 00:41:25.821800 | orchestrator | Saturday 28 February 2026 00:41:16 +0000 (0:00:00.127) 0:00:00.127 ***** 2026-02-28 00:41:25.821887 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:41:25.821903 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:41:25.821915 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:41:25.821937 | orchestrator | 2026-02-28 00:41:25.821948 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-28 00:41:25.821959 | orchestrator | Saturday 28 February 2026 00:41:16 +0000 (0:00:00.573) 0:00:00.700 ***** 2026-02-28 00:41:25.821975 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:41:25.821986 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:41:25.821998 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:41:25.822009 | orchestrator | 2026-02-28 00:41:25.822100 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-28 00:41:25.822112 | orchestrator | Saturday 28 February 2026 00:41:17 +0000 (0:00:00.333) 0:00:01.034 ***** 2026-02-28 00:41:25.822123 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:41:25.822135 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:41:25.822146 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:41:25.822194 | orchestrator | 2026-02-28 00:41:25.822213 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-28 00:41:25.822232 | orchestrator | Saturday 28 February 2026 00:41:17 +0000 (0:00:00.541) 0:00:01.575 ***** 2026-02-28 00:41:25.822250 | orchestrator | skipping: 
[testbed-node-3] 2026-02-28 00:41:25.822268 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:41:25.822288 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:41:25.822307 | orchestrator | 2026-02-28 00:41:25.822325 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-28 00:41:25.822337 | orchestrator | Saturday 28 February 2026 00:41:17 +0000 (0:00:00.242) 0:00:01.818 ***** 2026-02-28 00:41:25.822392 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-28 00:41:25.822409 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-28 00:41:25.822421 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-28 00:41:25.822432 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-28 00:41:25.822443 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-28 00:41:25.822454 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-28 00:41:25.822465 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-28 00:41:25.822475 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-28 00:41:25.822486 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-28 00:41:25.822498 | orchestrator | 2026-02-28 00:41:25.822509 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-28 00:41:25.822520 | orchestrator | Saturday 28 February 2026 00:41:19 +0000 (0:00:01.993) 0:00:03.811 ***** 2026-02-28 00:41:25.822531 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-28 00:41:25.822542 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-28 00:41:25.822553 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-28 00:41:25.822564 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-28 00:41:25.822575 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-28 00:41:25.822586 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-02-28 00:41:25.822596 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-28 00:41:25.822607 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-28 00:41:25.822618 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-28 00:41:25.822629 | orchestrator | 2026-02-28 00:41:25.822647 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-28 00:41:25.822659 | orchestrator | Saturday 28 February 2026 00:41:21 +0000 (0:00:01.485) 0:00:05.297 ***** 2026-02-28 00:41:25.822670 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-28 00:41:25.822681 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-28 00:41:25.822692 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-28 00:41:25.822702 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-28 00:41:25.822725 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-28 00:41:25.822736 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-28 00:41:25.822747 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-28 00:41:25.822758 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-28 00:41:25.822769 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-28 00:41:25.822780 | orchestrator | 2026-02-28 00:41:25.822791 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-28 00:41:25.822802 | orchestrator | Saturday 28 February 2026 00:41:24 +0000 (0:00:02.996) 0:00:08.293 ***** 2026-02-28 00:41:25.822813 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:41:25.822824 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:41:25.822835 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:41:25.822846 | orchestrator | 2026-02-28 00:41:25.822857 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-02-28 00:41:25.822908 | orchestrator | Saturday 28 February 2026 00:41:24 +0000 (0:00:00.591) 0:00:08.885 ***** 2026-02-28 00:41:25.822929 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:41:25.822946 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:41:25.822965 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:41:25.822985 | orchestrator | 2026-02-28 00:41:25.823003 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:41:25.823023 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:25.823042 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:25.823112 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:25.823134 | orchestrator | 2026-02-28 00:41:25.823155 | orchestrator | 2026-02-28 00:41:25.823175 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:41:25.823195 | orchestrator | Saturday 28 February 2026 00:41:25 +0000 (0:00:00.636) 0:00:09.521 ***** 2026-02-28 00:41:25.823215 | orchestrator | =============================================================================== 2026-02-28 00:41:25.823235 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.00s 2026-02-28 00:41:25.823255 | orchestrator | Check device availability ----------------------------------------------- 1.99s 2026-02-28 00:41:25.823273 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.49s 2026-02-28 00:41:25.823292 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2026-02-28 00:41:25.823310 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.59s 2026-02-28 00:41:25.823328 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s 2026-02-28 00:41:25.823348 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s 2026-02-28 00:41:25.823366 | orchestrator | Remove all rook related logical devices --------------------------------- 0.33s 2026-02-28 00:41:25.823385 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-02-28 00:41:38.158442 | orchestrator | 2026-02-28 00:41:38 | INFO  | Prepare task for execution of facts. 2026-02-28 00:41:38.237861 | orchestrator | 2026-02-28 00:41:38 | INFO  | Task e7c52b19-e70d-4124-802e-c6268ecd085a (facts) was prepared for execution. 2026-02-28 00:41:38.237940 | orchestrator | 2026-02-28 00:41:38 | INFO  | It takes a moment until task e7c52b19-e70d-4124-802e-c6268ecd085a (facts) has been started and output is visible here. 
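The `wipe-partitions` play above runs the same per-device sequence on `/dev/sdb`..`/dev/sdd` of each storage node: check availability, `wipefs`, zero the first 32M, then refresh udev. A sketch of that sequence against a scratch image file instead of a real disk (the udev steps are shown as comments because they need root and an actual block device):

```shell
# Stand-in for one OSD disk; on the nodes this is /dev/sdb../dev/sdd.
IMG=$(mktemp /tmp/wipe-demo.XXXXXX)
truncate -s 64M "$IMG"

wipefs --all "$IMG"                       # TASK: Wipe partitions with wipefs
dd if=/dev/zero of="$IMG" bs=1M count=32 conv=fsync 2>/dev/null \
                                          # TASK: Overwrite first 32M with zeros
# On a real node the play then runs, as root:
#   udevadm control --reload-rules        # TASK: Reload udev rules
#   udevadm trigger                       # TASK: Request device events from the kernel
echo "wiped $IMG"
```

Zeroing only the first 32M is enough to destroy the LVM/Ceph metadata at the start of the device without paying for a full overwrite.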
2026-02-28 00:41:51.434494 | orchestrator | 2026-02-28 00:41:51.434610 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-28 00:41:51.434627 | orchestrator | 2026-02-28 00:41:51.434666 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-28 00:41:51.434678 | orchestrator | Saturday 28 February 2026 00:41:42 +0000 (0:00:00.287) 0:00:00.287 ***** 2026-02-28 00:41:51.434690 | orchestrator | ok: [testbed-manager] 2026-02-28 00:41:51.434702 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:41:51.434713 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:41:51.434724 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:41:51.434735 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:41:51.434746 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:41:51.434757 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:41:51.434767 | orchestrator | 2026-02-28 00:41:51.434779 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-28 00:41:51.434790 | orchestrator | Saturday 28 February 2026 00:41:43 +0000 (0:00:01.100) 0:00:01.387 ***** 2026-02-28 00:41:51.434801 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:41:51.434813 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:41:51.434824 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:41:51.434834 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:41:51.434845 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:41:51.434856 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:41:51.434867 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:41:51.434878 | orchestrator | 2026-02-28 00:41:51.434889 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-28 00:41:51.434917 | orchestrator | 2026-02-28 00:41:51.434929 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-28 00:41:51.434941 | orchestrator | Saturday 28 February 2026 00:41:44 +0000 (0:00:01.296) 0:00:02.684 ***** 2026-02-28 00:41:51.434952 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:41:51.434963 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:41:51.434974 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:41:51.434985 | orchestrator | ok: [testbed-manager] 2026-02-28 00:41:51.434996 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:41:51.435007 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:41:51.435018 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:41:51.435031 | orchestrator | 2026-02-28 00:41:51.435043 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-28 00:41:51.435081 | orchestrator | 2026-02-28 00:41:51.435094 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-28 00:41:51.435121 | orchestrator | Saturday 28 February 2026 00:41:50 +0000 (0:00:05.634) 0:00:08.318 ***** 2026-02-28 00:41:51.435144 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:41:51.435157 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:41:51.435170 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:41:51.435182 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:41:51.435194 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:41:51.435206 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:41:51.435218 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:41:51.435231 | orchestrator | 2026-02-28 00:41:51.435243 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:41:51.435257 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:51.435271 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-28 00:41:51.435284 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:51.435297 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:51.435309 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:51.435331 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:51.435343 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:41:51.435357 | orchestrator | 2026-02-28 00:41:51.435369 | orchestrator | 2026-02-28 00:41:51.435381 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:41:51.435392 | orchestrator | Saturday 28 February 2026 00:41:51 +0000 (0:00:00.510) 0:00:08.828 ***** 2026-02-28 00:41:51.435404 | orchestrator | =============================================================================== 2026-02-28 00:41:51.435415 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.63s 2026-02-28 00:41:51.435426 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.30s 2026-02-28 00:41:51.435437 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s 2026-02-28 00:41:51.435448 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-02-28 00:41:53.753098 | orchestrator | 2026-02-28 00:41:53 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-02-28 00:41:53.814629 | orchestrator | 2026-02-28 00:41:53 | INFO  | Task 931175cf-1aa1-4f77-ac4d-7303a31cf361 (ceph-configure-lvm-volumes) was prepared for execution. 
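The `osism.commons.facts` tasks above use Ansible's standard local-facts mechanism: any `*.fact` file under `/etc/ansible/facts.d` (static JSON, or an executable that prints JSON) is picked up on the next fact-gathering run and exposed as `ansible_local.<name>`. Sketched against a temp directory instead of `/etc` (the file name and keys here are illustrative, not the ones the role ships):

```shell
# Stands in for /etc/ansible/facts.d, which the role creates.
FACTS_D=$(mktemp -d)

cat > "$FACTS_D/testbed.fact" <<'EOF'
{"deploy_mode": "manager", "ceph_stack": "ceph-ansible"}
EOF

# After the next setup/gather_facts run, Ansible would surface this as
# ansible_local.testbed.deploy_mode / ansible_local.testbed.ceph_stack.
cat "$FACTS_D/testbed.fact"
```

This is why the play re-gathers facts for all hosts right after copying fact files: the new `ansible_local` values only appear once `setup` runs again.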
2026-02-28 00:41:53.814721 | orchestrator | 2026-02-28 00:41:53 | INFO  | It takes a moment until task 931175cf-1aa1-4f77-ac4d-7303a31cf361 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-28 00:42:05.495652 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-28 00:42:05.495744 | orchestrator | 2.16.14 2026-02-28 00:42:05.495755 | orchestrator | 2026-02-28 00:42:05.495763 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-28 00:42:05.495772 | orchestrator | 2026-02-28 00:42:05.495779 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-28 00:42:05.495786 | orchestrator | Saturday 28 February 2026 00:41:58 +0000 (0:00:00.320) 0:00:00.320 ***** 2026-02-28 00:42:05.495794 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-28 00:42:05.495801 | orchestrator | 2026-02-28 00:42:05.495809 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 00:42:05.495815 | orchestrator | Saturday 28 February 2026 00:41:58 +0000 (0:00:00.245) 0:00:00.565 ***** 2026-02-28 00:42:05.495823 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:42:05.495830 | orchestrator | 2026-02-28 00:42:05.495837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.495844 | orchestrator | Saturday 28 February 2026 00:41:58 +0000 (0:00:00.223) 0:00:00.788 ***** 2026-02-28 00:42:05.495858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-28 00:42:05.495865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-28 00:42:05.495872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-28 00:42:05.495879 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-28 00:42:05.495886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-28 00:42:05.495893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-28 00:42:05.495899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-28 00:42:05.495906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-28 00:42:05.495913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-28 00:42:05.495920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-28 00:42:05.495943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-28 00:42:05.495950 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-28 00:42:05.495957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-28 00:42:05.495963 | orchestrator | 2026-02-28 00:42:05.495970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.495977 | orchestrator | Saturday 28 February 2026 00:41:59 +0000 (0:00:00.470) 0:00:01.259 ***** 2026-02-28 00:42:05.495983 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.495990 | orchestrator | 2026-02-28 00:42:05.495999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496011 | orchestrator | Saturday 28 February 2026 00:41:59 +0000 (0:00:00.203) 0:00:01.463 ***** 2026-02-28 00:42:05.496023 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496035 | orchestrator | 2026-02-28 00:42:05.496107 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496120 | orchestrator | Saturday 28 February 2026 00:41:59 +0000 (0:00:00.201) 0:00:01.664 ***** 2026-02-28 00:42:05.496131 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496142 | orchestrator | 2026-02-28 00:42:05.496153 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496165 | orchestrator | Saturday 28 February 2026 00:41:59 +0000 (0:00:00.193) 0:00:01.857 ***** 2026-02-28 00:42:05.496173 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496180 | orchestrator | 2026-02-28 00:42:05.496191 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496204 | orchestrator | Saturday 28 February 2026 00:41:59 +0000 (0:00:00.197) 0:00:02.055 ***** 2026-02-28 00:42:05.496216 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496223 | orchestrator | 2026-02-28 00:42:05.496230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496237 | orchestrator | Saturday 28 February 2026 00:42:00 +0000 (0:00:00.190) 0:00:02.245 ***** 2026-02-28 00:42:05.496243 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496250 | orchestrator | 2026-02-28 00:42:05.496257 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496264 | orchestrator | Saturday 28 February 2026 00:42:00 +0000 (0:00:00.221) 0:00:02.467 ***** 2026-02-28 00:42:05.496270 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496277 | orchestrator | 2026-02-28 00:42:05.496284 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496291 | orchestrator | Saturday 28 February 2026 00:42:00 +0000 (0:00:00.203) 0:00:02.670 ***** 
2026-02-28 00:42:05.496297 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496304 | orchestrator | 2026-02-28 00:42:05.496311 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496318 | orchestrator | Saturday 28 February 2026 00:42:00 +0000 (0:00:00.208) 0:00:02.878 ***** 2026-02-28 00:42:05.496324 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c) 2026-02-28 00:42:05.496332 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c) 2026-02-28 00:42:05.496339 | orchestrator | 2026-02-28 00:42:05.496346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496366 | orchestrator | Saturday 28 February 2026 00:42:01 +0000 (0:00:00.403) 0:00:03.282 ***** 2026-02-28 00:42:05.496377 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5ed1e25e-e858-43bf-b647-15f2d5789185) 2026-02-28 00:42:05.496388 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5ed1e25e-e858-43bf-b647-15f2d5789185) 2026-02-28 00:42:05.496399 | orchestrator | 2026-02-28 00:42:05.496421 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496443 | orchestrator | Saturday 28 February 2026 00:42:01 +0000 (0:00:00.647) 0:00:03.929 ***** 2026-02-28 00:42:05.496450 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e0efcc73-9d13-408e-8d84-f67f704dc102) 2026-02-28 00:42:05.496457 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e0efcc73-9d13-408e-8d84-f67f704dc102) 2026-02-28 00:42:05.496467 | orchestrator | 2026-02-28 00:42:05.496480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496492 | orchestrator | Saturday 28 February 2026 00:42:02 
+0000 (0:00:00.639) 0:00:04.568 ***** 2026-02-28 00:42:05.496503 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dee7b1c7-019b-4aff-807a-ca0205e3afa9) 2026-02-28 00:42:05.496510 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dee7b1c7-019b-4aff-807a-ca0205e3afa9) 2026-02-28 00:42:05.496517 | orchestrator | 2026-02-28 00:42:05.496524 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:05.496530 | orchestrator | Saturday 28 February 2026 00:42:03 +0000 (0:00:00.824) 0:00:05.392 ***** 2026-02-28 00:42:05.496537 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:42:05.496548 | orchestrator | 2026-02-28 00:42:05.496560 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:05.496571 | orchestrator | Saturday 28 February 2026 00:42:03 +0000 (0:00:00.336) 0:00:05.729 ***** 2026-02-28 00:42:05.496582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-28 00:42:05.496589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-28 00:42:05.496596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-28 00:42:05.496602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-28 00:42:05.496609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-28 00:42:05.496616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-28 00:42:05.496622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-28 00:42:05.496629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-02-28 00:42:05.496636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-28 00:42:05.496643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-28 00:42:05.496649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-28 00:42:05.496656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-28 00:42:05.496663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-28 00:42:05.496670 | orchestrator | 2026-02-28 00:42:05.496676 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:05.496683 | orchestrator | Saturday 28 February 2026 00:42:04 +0000 (0:00:00.399) 0:00:06.129 ***** 2026-02-28 00:42:05.496690 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496696 | orchestrator | 2026-02-28 00:42:05.496703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:05.496710 | orchestrator | Saturday 28 February 2026 00:42:04 +0000 (0:00:00.226) 0:00:06.355 ***** 2026-02-28 00:42:05.496716 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496723 | orchestrator | 2026-02-28 00:42:05.496730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:05.496736 | orchestrator | Saturday 28 February 2026 00:42:04 +0000 (0:00:00.186) 0:00:06.541 ***** 2026-02-28 00:42:05.496743 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496755 | orchestrator | 2026-02-28 00:42:05.496762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:05.496769 | orchestrator | Saturday 28 February 2026 00:42:04 
+0000 (0:00:00.222) 0:00:06.764 ***** 2026-02-28 00:42:05.496775 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496782 | orchestrator | 2026-02-28 00:42:05.496789 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:05.496795 | orchestrator | Saturday 28 February 2026 00:42:04 +0000 (0:00:00.198) 0:00:06.963 ***** 2026-02-28 00:42:05.496802 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496809 | orchestrator | 2026-02-28 00:42:05.496816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:05.496822 | orchestrator | Saturday 28 February 2026 00:42:05 +0000 (0:00:00.210) 0:00:07.173 ***** 2026-02-28 00:42:05.496829 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496836 | orchestrator | 2026-02-28 00:42:05.496843 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:05.496849 | orchestrator | Saturday 28 February 2026 00:42:05 +0000 (0:00:00.193) 0:00:07.367 ***** 2026-02-28 00:42:05.496856 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:05.496863 | orchestrator | 2026-02-28 00:42:05.496875 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:12.901266 | orchestrator | Saturday 28 February 2026 00:42:05 +0000 (0:00:00.218) 0:00:07.585 ***** 2026-02-28 00:42:12.901342 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901350 | orchestrator | 2026-02-28 00:42:12.901356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:12.901361 | orchestrator | Saturday 28 February 2026 00:42:05 +0000 (0:00:00.233) 0:00:07.818 ***** 2026-02-28 00:42:12.901366 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-28 00:42:12.901371 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-28 
00:42:12.901376 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-28 00:42:12.901380 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-28 00:42:12.901384 | orchestrator | 2026-02-28 00:42:12.901389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:12.901404 | orchestrator | Saturday 28 February 2026 00:42:06 +0000 (0:00:01.065) 0:00:08.884 ***** 2026-02-28 00:42:12.901408 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901412 | orchestrator | 2026-02-28 00:42:12.901417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:12.901421 | orchestrator | Saturday 28 February 2026 00:42:06 +0000 (0:00:00.191) 0:00:09.075 ***** 2026-02-28 00:42:12.901426 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901430 | orchestrator | 2026-02-28 00:42:12.901434 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:12.901438 | orchestrator | Saturday 28 February 2026 00:42:07 +0000 (0:00:00.196) 0:00:09.272 ***** 2026-02-28 00:42:12.901443 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901447 | orchestrator | 2026-02-28 00:42:12.901451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:12.901455 | orchestrator | Saturday 28 February 2026 00:42:07 +0000 (0:00:00.198) 0:00:09.470 ***** 2026-02-28 00:42:12.901460 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901464 | orchestrator | 2026-02-28 00:42:12.901468 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-28 00:42:12.901473 | orchestrator | Saturday 28 February 2026 00:42:07 +0000 (0:00:00.208) 0:00:09.679 ***** 2026-02-28 00:42:12.901477 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-28 00:42:12.901481 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-28 00:42:12.901486 | orchestrator | 2026-02-28 00:42:12.901490 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-28 00:42:12.901494 | orchestrator | Saturday 28 February 2026 00:42:07 +0000 (0:00:00.188) 0:00:09.867 ***** 2026-02-28 00:42:12.901511 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901515 | orchestrator | 2026-02-28 00:42:12.901519 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-28 00:42:12.901524 | orchestrator | Saturday 28 February 2026 00:42:07 +0000 (0:00:00.139) 0:00:10.006 ***** 2026-02-28 00:42:12.901529 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901593 | orchestrator | 2026-02-28 00:42:12.901599 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-28 00:42:12.901606 | orchestrator | Saturday 28 February 2026 00:42:08 +0000 (0:00:00.143) 0:00:10.150 ***** 2026-02-28 00:42:12.901612 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901618 | orchestrator | 2026-02-28 00:42:12.901624 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-28 00:42:12.901630 | orchestrator | Saturday 28 February 2026 00:42:08 +0000 (0:00:00.136) 0:00:10.287 ***** 2026-02-28 00:42:12.901637 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:42:12.901644 | orchestrator | 2026-02-28 00:42:12.901651 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-28 00:42:12.901657 | orchestrator | Saturday 28 February 2026 00:42:08 +0000 (0:00:00.143) 0:00:10.431 ***** 2026-02-28 00:42:12.901665 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e2365387-977d-5b6c-ac86-7516065bddb2'}}) 2026-02-28 00:42:12.901672 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c221fe87-4514-5691-85ae-4cf2e32a6a79'}}) 2026-02-28 00:42:12.901678 | orchestrator | 2026-02-28 00:42:12.901685 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-28 00:42:12.901692 | orchestrator | Saturday 28 February 2026 00:42:08 +0000 (0:00:00.166) 0:00:10.597 ***** 2026-02-28 00:42:12.901700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e2365387-977d-5b6c-ac86-7516065bddb2'}})  2026-02-28 00:42:12.901713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c221fe87-4514-5691-85ae-4cf2e32a6a79'}})  2026-02-28 00:42:12.901725 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901731 | orchestrator | 2026-02-28 00:42:12.901738 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-28 00:42:12.901745 | orchestrator | Saturday 28 February 2026 00:42:08 +0000 (0:00:00.185) 0:00:10.783 ***** 2026-02-28 00:42:12.901752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e2365387-977d-5b6c-ac86-7516065bddb2'}})  2026-02-28 00:42:12.901759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c221fe87-4514-5691-85ae-4cf2e32a6a79'}})  2026-02-28 00:42:12.901766 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901772 | orchestrator | 2026-02-28 00:42:12.901779 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-28 00:42:12.901785 | orchestrator | Saturday 28 February 2026 00:42:09 +0000 (0:00:00.365) 0:00:11.149 ***** 2026-02-28 00:42:12.901792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e2365387-977d-5b6c-ac86-7516065bddb2'}})  2026-02-28 00:42:12.901815 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c221fe87-4514-5691-85ae-4cf2e32a6a79'}})  2026-02-28 00:42:12.901822 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901829 | orchestrator | 2026-02-28 00:42:12.901835 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-28 00:42:12.901842 | orchestrator | Saturday 28 February 2026 00:42:09 +0000 (0:00:00.169) 0:00:11.318 ***** 2026-02-28 00:42:12.901849 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:42:12.901855 | orchestrator | 2026-02-28 00:42:12.901862 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-28 00:42:12.901870 | orchestrator | Saturday 28 February 2026 00:42:09 +0000 (0:00:00.150) 0:00:11.469 ***** 2026-02-28 00:42:12.901876 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:42:12.901892 | orchestrator | 2026-02-28 00:42:12.901899 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-28 00:42:12.901906 | orchestrator | Saturday 28 February 2026 00:42:09 +0000 (0:00:00.136) 0:00:11.606 ***** 2026-02-28 00:42:12.901913 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901921 | orchestrator | 2026-02-28 00:42:12.901927 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-28 00:42:12.901935 | orchestrator | Saturday 28 February 2026 00:42:09 +0000 (0:00:00.128) 0:00:11.735 ***** 2026-02-28 00:42:12.901942 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901950 | orchestrator | 2026-02-28 00:42:12.901955 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-28 00:42:12.901960 | orchestrator | Saturday 28 February 2026 00:42:09 +0000 (0:00:00.140) 0:00:11.876 ***** 2026-02-28 00:42:12.901965 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:42:12.901969 | orchestrator | 2026-02-28 
00:42:12.901974 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-28 00:42:12.901979 | orchestrator | Saturday 28 February 2026 00:42:09 +0000 (0:00:00.136) 0:00:12.013 *****
2026-02-28 00:42:12.901984 | orchestrator | ok: [testbed-node-3] => {
2026-02-28 00:42:12.901989 | orchestrator |  "ceph_osd_devices": {
2026-02-28 00:42:12.901994 | orchestrator |  "sdb": {
2026-02-28 00:42:12.901999 | orchestrator |  "osd_lvm_uuid": "e2365387-977d-5b6c-ac86-7516065bddb2"
2026-02-28 00:42:12.902004 | orchestrator |  },
2026-02-28 00:42:12.902009 | orchestrator |  "sdc": {
2026-02-28 00:42:12.902070 | orchestrator |  "osd_lvm_uuid": "c221fe87-4514-5691-85ae-4cf2e32a6a79"
2026-02-28 00:42:12.902075 | orchestrator |  }
2026-02-28 00:42:12.902081 | orchestrator |  }
2026-02-28 00:42:12.902086 | orchestrator | }
2026-02-28 00:42:12.902091 | orchestrator |
2026-02-28 00:42:12.902096 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-28 00:42:12.902101 | orchestrator | Saturday 28 February 2026 00:42:10 +0000 (0:00:00.161) 0:00:12.174 *****
2026-02-28 00:42:12.902106 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:12.902111 | orchestrator |
2026-02-28 00:42:12.902115 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-28 00:42:12.902120 | orchestrator | Saturday 28 February 2026 00:42:10 +0000 (0:00:00.125) 0:00:12.299 *****
2026-02-28 00:42:12.902125 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:12.902130 | orchestrator |
2026-02-28 00:42:12.902135 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-28 00:42:12.902140 | orchestrator | Saturday 28 February 2026 00:42:10 +0000 (0:00:00.142) 0:00:12.435 *****
2026-02-28 00:42:12.902145 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:42:12.902150 | orchestrator |
2026-02-28 00:42:12.902155 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-28 00:42:12.902160 | orchestrator | Saturday 28 February 2026 00:42:10 +0000 (0:00:00.142) 0:00:12.577 *****
2026-02-28 00:42:12.902165 | orchestrator | changed: [testbed-node-3] => {
2026-02-28 00:42:12.902169 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-02-28 00:42:12.902175 | orchestrator |  "ceph_osd_devices": {
2026-02-28 00:42:12.902180 | orchestrator |  "sdb": {
2026-02-28 00:42:12.902185 | orchestrator |  "osd_lvm_uuid": "e2365387-977d-5b6c-ac86-7516065bddb2"
2026-02-28 00:42:12.902190 | orchestrator |  },
2026-02-28 00:42:12.902195 | orchestrator |  "sdc": {
2026-02-28 00:42:12.902199 | orchestrator |  "osd_lvm_uuid": "c221fe87-4514-5691-85ae-4cf2e32a6a79"
2026-02-28 00:42:12.902204 | orchestrator |  }
2026-02-28 00:42:12.902209 | orchestrator |  },
2026-02-28 00:42:12.902213 | orchestrator |  "lvm_volumes": [
2026-02-28 00:42:12.902217 | orchestrator |  {
2026-02-28 00:42:12.902222 | orchestrator |  "data": "osd-block-e2365387-977d-5b6c-ac86-7516065bddb2",
2026-02-28 00:42:12.902226 | orchestrator |  "data_vg": "ceph-e2365387-977d-5b6c-ac86-7516065bddb2"
2026-02-28 00:42:12.902235 | orchestrator |  },
2026-02-28 00:42:12.902239 | orchestrator |  {
2026-02-28 00:42:12.902244 | orchestrator |  "data": "osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79",
2026-02-28 00:42:12.902255 | orchestrator |  "data_vg": "ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79"
2026-02-28 00:42:12.902260 | orchestrator |  }
2026-02-28 00:42:12.902264 | orchestrator |  ]
2026-02-28 00:42:12.902270 | orchestrator |  }
2026-02-28 00:42:12.902277 | orchestrator | }
2026-02-28 00:42:12.902283 | orchestrator |
2026-02-28 00:42:12.902290 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-28 00:42:12.902303 | orchestrator | Saturday 28 February 2026 00:42:10 +0000 (0:00:00.407) 0:00:12.985 ***** 2026-02-28
00:42:12.902310 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-28 00:42:12.902316 | orchestrator | 2026-02-28 00:42:12.902322 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-28 00:42:12.902329 | orchestrator | 2026-02-28 00:42:12.902336 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-28 00:42:12.902343 | orchestrator | Saturday 28 February 2026 00:42:12 +0000 (0:00:01.592) 0:00:14.577 ***** 2026-02-28 00:42:12.902349 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-28 00:42:12.902356 | orchestrator | 2026-02-28 00:42:12.902363 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 00:42:12.902370 | orchestrator | Saturday 28 February 2026 00:42:12 +0000 (0:00:00.221) 0:00:14.799 ***** 2026-02-28 00:42:12.902377 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:42:12.902386 | orchestrator | 2026-02-28 00:42:12.902401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.631995 | orchestrator | Saturday 28 February 2026 00:42:12 +0000 (0:00:00.194) 0:00:14.994 ***** 2026-02-28 00:42:19.757630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-28 00:42:19.757706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-28 00:42:19.757719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-28 00:42:19.757729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-28 00:42:19.757740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-28 00:42:19.757750 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-28 00:42:19.757760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-28 00:42:19.757774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-28 00:42:19.757784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-28 00:42:19.757794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-28 00:42:19.757804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-28 00:42:19.757813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-28 00:42:19.757866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-28 00:42:19.757878 | orchestrator | 2026-02-28 00:42:19.757899 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.757909 | orchestrator | Saturday 28 February 2026 00:42:13 +0000 (0:00:00.347) 0:00:15.342 ***** 2026-02-28 00:42:19.757919 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.757930 | orchestrator | 2026-02-28 00:42:19.757940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.757950 | orchestrator | Saturday 28 February 2026 00:42:13 +0000 (0:00:00.200) 0:00:15.543 ***** 2026-02-28 00:42:19.757983 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.757994 | orchestrator | 2026-02-28 00:42:19.758004 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758080 | orchestrator | Saturday 28 February 2026 00:42:13 +0000 (0:00:00.180) 0:00:15.723 ***** 2026-02-28 00:42:19.758093 | orchestrator | skipping: 
[testbed-node-4] 2026-02-28 00:42:19.758103 | orchestrator | 2026-02-28 00:42:19.758112 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758122 | orchestrator | Saturday 28 February 2026 00:42:13 +0000 (0:00:00.181) 0:00:15.904 ***** 2026-02-28 00:42:19.758132 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758142 | orchestrator | 2026-02-28 00:42:19.758152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758162 | orchestrator | Saturday 28 February 2026 00:42:13 +0000 (0:00:00.171) 0:00:16.076 ***** 2026-02-28 00:42:19.758171 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758181 | orchestrator | 2026-02-28 00:42:19.758191 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758200 | orchestrator | Saturday 28 February 2026 00:42:14 +0000 (0:00:00.512) 0:00:16.589 ***** 2026-02-28 00:42:19.758210 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758219 | orchestrator | 2026-02-28 00:42:19.758229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758239 | orchestrator | Saturday 28 February 2026 00:42:14 +0000 (0:00:00.185) 0:00:16.774 ***** 2026-02-28 00:42:19.758249 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758258 | orchestrator | 2026-02-28 00:42:19.758268 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758277 | orchestrator | Saturday 28 February 2026 00:42:14 +0000 (0:00:00.162) 0:00:16.936 ***** 2026-02-28 00:42:19.758287 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758297 | orchestrator | 2026-02-28 00:42:19.758307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758316 | 
orchestrator | Saturday 28 February 2026 00:42:15 +0000 (0:00:00.211) 0:00:17.148 ***** 2026-02-28 00:42:19.758326 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc) 2026-02-28 00:42:19.758337 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc) 2026-02-28 00:42:19.758347 | orchestrator | 2026-02-28 00:42:19.758356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758366 | orchestrator | Saturday 28 February 2026 00:42:15 +0000 (0:00:00.355) 0:00:17.504 ***** 2026-02-28 00:42:19.758376 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d) 2026-02-28 00:42:19.758385 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d) 2026-02-28 00:42:19.758395 | orchestrator | 2026-02-28 00:42:19.758405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758414 | orchestrator | Saturday 28 February 2026 00:42:15 +0000 (0:00:00.368) 0:00:17.872 ***** 2026-02-28 00:42:19.758424 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd) 2026-02-28 00:42:19.758434 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd) 2026-02-28 00:42:19.758444 | orchestrator | 2026-02-28 00:42:19.758454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758493 | orchestrator | Saturday 28 February 2026 00:42:16 +0000 (0:00:00.402) 0:00:18.275 ***** 2026-02-28 00:42:19.758503 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0) 2026-02-28 00:42:19.758513 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0) 2026-02-28 00:42:19.758523 | orchestrator | 2026-02-28 00:42:19.758541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:19.758551 | orchestrator | Saturday 28 February 2026 00:42:16 +0000 (0:00:00.309) 0:00:18.585 ***** 2026-02-28 00:42:19.758561 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:42:19.758571 | orchestrator | 2026-02-28 00:42:19.758580 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:19.758590 | orchestrator | Saturday 28 February 2026 00:42:16 +0000 (0:00:00.302) 0:00:18.887 ***** 2026-02-28 00:42:19.758600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-28 00:42:19.758610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-28 00:42:19.758628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-28 00:42:19.758638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-28 00:42:19.758647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-28 00:42:19.758657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-28 00:42:19.758667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-28 00:42:19.758676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-28 00:42:19.758686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-28 00:42:19.758695 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-28 00:42:19.758705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-28 00:42:19.758714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-28 00:42:19.758724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-28 00:42:19.758734 | orchestrator | 2026-02-28 00:42:19.758743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:19.758753 | orchestrator | Saturday 28 February 2026 00:42:17 +0000 (0:00:00.279) 0:00:19.167 ***** 2026-02-28 00:42:19.758762 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758772 | orchestrator | 2026-02-28 00:42:19.758782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:19.758791 | orchestrator | Saturday 28 February 2026 00:42:17 +0000 (0:00:00.395) 0:00:19.563 ***** 2026-02-28 00:42:19.758801 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758811 | orchestrator | 2026-02-28 00:42:19.758821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:19.758830 | orchestrator | Saturday 28 February 2026 00:42:17 +0000 (0:00:00.188) 0:00:19.751 ***** 2026-02-28 00:42:19.758840 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758850 | orchestrator | 2026-02-28 00:42:19.758859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:19.758869 | orchestrator | Saturday 28 February 2026 00:42:17 +0000 (0:00:00.158) 0:00:19.910 ***** 2026-02-28 00:42:19.758879 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758888 | orchestrator | 2026-02-28 00:42:19.758898 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-28 00:42:19.758907 | orchestrator | Saturday 28 February 2026 00:42:18 +0000 (0:00:00.186) 0:00:20.097 ***** 2026-02-28 00:42:19.758917 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758927 | orchestrator | 2026-02-28 00:42:19.758937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:19.758946 | orchestrator | Saturday 28 February 2026 00:42:18 +0000 (0:00:00.217) 0:00:20.315 ***** 2026-02-28 00:42:19.758956 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.758973 | orchestrator | 2026-02-28 00:42:19.758983 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:19.758992 | orchestrator | Saturday 28 February 2026 00:42:18 +0000 (0:00:00.186) 0:00:20.502 ***** 2026-02-28 00:42:19.759002 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.759012 | orchestrator | 2026-02-28 00:42:19.759021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:19.759031 | orchestrator | Saturday 28 February 2026 00:42:18 +0000 (0:00:00.180) 0:00:20.682 ***** 2026-02-28 00:42:19.759095 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:19.759113 | orchestrator | 2026-02-28 00:42:19.759129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:19.759145 | orchestrator | Saturday 28 February 2026 00:42:18 +0000 (0:00:00.178) 0:00:20.860 ***** 2026-02-28 00:42:19.759162 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-28 00:42:19.759179 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-28 00:42:19.759197 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-28 00:42:19.759209 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-28 00:42:19.759219 | orchestrator | 2026-02-28 
00:42:19.759229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:19.759239 | orchestrator | Saturday 28 February 2026 00:42:19 +0000 (0:00:00.736) 0:00:21.597 ***** 2026-02-28 00:42:19.759249 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.537154 | orchestrator | 2026-02-28 00:42:26.537265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:26.537296 | orchestrator | Saturday 28 February 2026 00:42:19 +0000 (0:00:00.204) 0:00:21.801 ***** 2026-02-28 00:42:26.537309 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.537332 | orchestrator | 2026-02-28 00:42:26.537344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:26.537355 | orchestrator | Saturday 28 February 2026 00:42:19 +0000 (0:00:00.187) 0:00:21.988 ***** 2026-02-28 00:42:26.537366 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.537377 | orchestrator | 2026-02-28 00:42:26.537389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:26.537400 | orchestrator | Saturday 28 February 2026 00:42:20 +0000 (0:00:00.179) 0:00:22.168 ***** 2026-02-28 00:42:26.537411 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.537422 | orchestrator | 2026-02-28 00:42:26.537433 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-28 00:42:26.537444 | orchestrator | Saturday 28 February 2026 00:42:20 +0000 (0:00:00.659) 0:00:22.828 ***** 2026-02-28 00:42:26.537455 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-28 00:42:26.537466 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-28 00:42:26.537477 | orchestrator | 2026-02-28 00:42:26.537489 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-02-28 00:42:26.537522 | orchestrator | Saturday 28 February 2026 00:42:20 +0000 (0:00:00.174) 0:00:23.003 ***** 2026-02-28 00:42:26.537543 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.537561 | orchestrator | 2026-02-28 00:42:26.537579 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-28 00:42:26.537598 | orchestrator | Saturday 28 February 2026 00:42:21 +0000 (0:00:00.136) 0:00:23.139 ***** 2026-02-28 00:42:26.537616 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.537635 | orchestrator | 2026-02-28 00:42:26.537659 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-28 00:42:26.537686 | orchestrator | Saturday 28 February 2026 00:42:21 +0000 (0:00:00.138) 0:00:23.277 ***** 2026-02-28 00:42:26.537707 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.537727 | orchestrator | 2026-02-28 00:42:26.537747 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-28 00:42:26.537766 | orchestrator | Saturday 28 February 2026 00:42:21 +0000 (0:00:00.151) 0:00:23.429 ***** 2026-02-28 00:42:26.537816 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:42:26.537837 | orchestrator | 2026-02-28 00:42:26.537854 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-28 00:42:26.537866 | orchestrator | Saturday 28 February 2026 00:42:21 +0000 (0:00:00.153) 0:00:23.583 ***** 2026-02-28 00:42:26.537879 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'}}) 2026-02-28 00:42:26.537892 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4d8e79be-6c7a-5031-8b8d-1755de447a00'}}) 2026-02-28 00:42:26.537905 | orchestrator | 2026-02-28 00:42:26.537917 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-28 00:42:26.537930 | orchestrator | Saturday 28 February 2026 00:42:21 +0000 (0:00:00.173) 0:00:23.757 ***** 2026-02-28 00:42:26.537943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'}})  2026-02-28 00:42:26.537957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4d8e79be-6c7a-5031-8b8d-1755de447a00'}})  2026-02-28 00:42:26.537969 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.537982 | orchestrator | 2026-02-28 00:42:26.537995 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-28 00:42:26.538007 | orchestrator | Saturday 28 February 2026 00:42:21 +0000 (0:00:00.141) 0:00:23.898 ***** 2026-02-28 00:42:26.538153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'}})  2026-02-28 00:42:26.538167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4d8e79be-6c7a-5031-8b8d-1755de447a00'}})  2026-02-28 00:42:26.538179 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.538190 | orchestrator | 2026-02-28 00:42:26.538201 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-28 00:42:26.538212 | orchestrator | Saturday 28 February 2026 00:42:21 +0000 (0:00:00.171) 0:00:24.070 ***** 2026-02-28 00:42:26.538223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'}})  2026-02-28 00:42:26.538234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4d8e79be-6c7a-5031-8b8d-1755de447a00'}})  2026-02-28 00:42:26.538245 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.538256 | 
orchestrator | 2026-02-28 00:42:26.538267 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-28 00:42:26.538277 | orchestrator | Saturday 28 February 2026 00:42:22 +0000 (0:00:00.147) 0:00:24.217 ***** 2026-02-28 00:42:26.538288 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:42:26.538299 | orchestrator | 2026-02-28 00:42:26.538310 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-28 00:42:26.538321 | orchestrator | Saturday 28 February 2026 00:42:22 +0000 (0:00:00.135) 0:00:24.352 ***** 2026-02-28 00:42:26.538332 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:42:26.538343 | orchestrator | 2026-02-28 00:42:26.538353 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-28 00:42:26.538364 | orchestrator | Saturday 28 February 2026 00:42:22 +0000 (0:00:00.147) 0:00:24.500 ***** 2026-02-28 00:42:26.538397 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.538409 | orchestrator | 2026-02-28 00:42:26.538428 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-28 00:42:26.538447 | orchestrator | Saturday 28 February 2026 00:42:22 +0000 (0:00:00.356) 0:00:24.857 ***** 2026-02-28 00:42:26.538478 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.538497 | orchestrator | 2026-02-28 00:42:26.538515 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-28 00:42:26.538533 | orchestrator | Saturday 28 February 2026 00:42:22 +0000 (0:00:00.143) 0:00:25.000 ***** 2026-02-28 00:42:26.538551 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:42:26.538585 | orchestrator | 2026-02-28 00:42:26.538604 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-28 00:42:26.538622 | orchestrator | Saturday 28 February 2026 00:42:23 +0000 
(0:00:00.178) 0:00:25.179 *****
2026-02-28 00:42:26.538641 | orchestrator | ok: [testbed-node-4] => {
2026-02-28 00:42:26.538661 | orchestrator |  "ceph_osd_devices": {
2026-02-28 00:42:26.538682 | orchestrator |  "sdb": {
2026-02-28 00:42:26.538701 | orchestrator |  "osd_lvm_uuid": "4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e"
2026-02-28 00:42:26.538721 | orchestrator |  },
2026-02-28 00:42:26.538732 | orchestrator |  "sdc": {
2026-02-28 00:42:26.538744 | orchestrator |  "osd_lvm_uuid": "4d8e79be-6c7a-5031-8b8d-1755de447a00"
2026-02-28 00:42:26.538755 | orchestrator |  }
2026-02-28 00:42:26.538766 | orchestrator |  }
2026-02-28 00:42:26.538777 | orchestrator | }
2026-02-28 00:42:26.538789 | orchestrator |
2026-02-28 00:42:26.538800 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-28 00:42:26.538811 | orchestrator | Saturday 28 February 2026 00:42:23 +0000 (0:00:00.176) 0:00:25.356 *****
2026-02-28 00:42:26.538822 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:26.538833 | orchestrator |
2026-02-28 00:42:26.538844 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-28 00:42:26.538855 | orchestrator | Saturday 28 February 2026 00:42:23 +0000 (0:00:00.202) 0:00:25.558 *****
2026-02-28 00:42:26.538866 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:26.538877 | orchestrator |
2026-02-28 00:42:26.538888 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-28 00:42:26.538899 | orchestrator | Saturday 28 February 2026 00:42:23 +0000 (0:00:00.185) 0:00:25.743 *****
2026-02-28 00:42:26.538910 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:42:26.538921 | orchestrator |
2026-02-28 00:42:26.538932 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-28 00:42:26.538952 | orchestrator | Saturday 28 February 2026 00:42:23 +0000 (0:00:00.178) 0:00:25.922 *****
2026-02-28 00:42:26.538964 | orchestrator | changed: [testbed-node-4] => {
2026-02-28 00:42:26.538975 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-02-28 00:42:26.538986 | orchestrator |  "ceph_osd_devices": {
2026-02-28 00:42:26.538997 | orchestrator |  "sdb": {
2026-02-28 00:42:26.539009 | orchestrator |  "osd_lvm_uuid": "4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e"
2026-02-28 00:42:26.539020 | orchestrator |  },
2026-02-28 00:42:26.539031 | orchestrator |  "sdc": {
2026-02-28 00:42:26.539068 | orchestrator |  "osd_lvm_uuid": "4d8e79be-6c7a-5031-8b8d-1755de447a00"
2026-02-28 00:42:26.539080 | orchestrator |  }
2026-02-28 00:42:26.539091 | orchestrator |  },
2026-02-28 00:42:26.539102 | orchestrator |  "lvm_volumes": [
2026-02-28 00:42:26.539113 | orchestrator |  {
2026-02-28 00:42:26.539124 | orchestrator |  "data": "osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e",
2026-02-28 00:42:26.539135 | orchestrator |  "data_vg": "ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e"
2026-02-28 00:42:26.539146 | orchestrator |  },
2026-02-28 00:42:26.539157 | orchestrator |  {
2026-02-28 00:42:26.539168 | orchestrator |  "data": "osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00",
2026-02-28 00:42:26.539179 | orchestrator |  "data_vg": "ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00"
2026-02-28 00:42:26.539190 | orchestrator |  }
2026-02-28 00:42:26.539201 | orchestrator |  ]
2026-02-28 00:42:26.539212 | orchestrator |  }
2026-02-28 00:42:26.539223 | orchestrator | }
2026-02-28 00:42:26.539234 | orchestrator |
2026-02-28 00:42:26.539271 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-28 00:42:26.539282 | orchestrator | Saturday 28 February 2026 00:42:24 +0000 (0:00:00.272) 0:00:26.195 *****
2026-02-28 00:42:26.539293 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-28 00:42:26.539304 | orchestrator |
2026-02-28 00:42:26.539324 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2026-02-28 00:42:26.539335 | orchestrator | 2026-02-28 00:42:26.539346 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-28 00:42:26.539357 | orchestrator | Saturday 28 February 2026 00:42:25 +0000 (0:00:01.177) 0:00:27.372 ***** 2026-02-28 00:42:26.539368 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-28 00:42:26.539379 | orchestrator | 2026-02-28 00:42:26.539390 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 00:42:26.539401 | orchestrator | Saturday 28 February 2026 00:42:25 +0000 (0:00:00.703) 0:00:28.075 ***** 2026-02-28 00:42:26.539412 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:42:26.539423 | orchestrator | 2026-02-28 00:42:26.539434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:26.539445 | orchestrator | Saturday 28 February 2026 00:42:26 +0000 (0:00:00.236) 0:00:28.312 ***** 2026-02-28 00:42:26.539456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-28 00:42:26.539467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-28 00:42:26.539478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-28 00:42:26.539489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-28 00:42:26.539500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-28 00:42:26.539522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-28 00:42:35.320994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-28 00:42:35.321149 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-28 00:42:35.321164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-28 00:42:35.321176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-28 00:42:35.321186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-28 00:42:35.321196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-28 00:42:35.321206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-28 00:42:35.321217 | orchestrator | 2026-02-28 00:42:35.321228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321239 | orchestrator | Saturday 28 February 2026 00:42:26 +0000 (0:00:00.408) 0:00:28.721 ***** 2026-02-28 00:42:35.321249 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.321260 | orchestrator | 2026-02-28 00:42:35.321270 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321280 | orchestrator | Saturday 28 February 2026 00:42:26 +0000 (0:00:00.233) 0:00:28.954 ***** 2026-02-28 00:42:35.321290 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.321300 | orchestrator | 2026-02-28 00:42:35.321309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321319 | orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.216) 0:00:29.171 ***** 2026-02-28 00:42:35.321329 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.321339 | orchestrator | 2026-02-28 00:42:35.321349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321359 | 
orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.193) 0:00:29.364 ***** 2026-02-28 00:42:35.321369 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.321378 | orchestrator | 2026-02-28 00:42:35.321388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321398 | orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.291) 0:00:29.656 ***** 2026-02-28 00:42:35.321430 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.321441 | orchestrator | 2026-02-28 00:42:35.321451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321461 | orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.199) 0:00:29.855 ***** 2026-02-28 00:42:35.321470 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.321480 | orchestrator | 2026-02-28 00:42:35.321490 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321499 | orchestrator | Saturday 28 February 2026 00:42:27 +0000 (0:00:00.181) 0:00:30.037 ***** 2026-02-28 00:42:35.321509 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.321519 | orchestrator | 2026-02-28 00:42:35.321529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321541 | orchestrator | Saturday 28 February 2026 00:42:28 +0000 (0:00:00.200) 0:00:30.238 ***** 2026-02-28 00:42:35.321552 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.321563 | orchestrator | 2026-02-28 00:42:35.321574 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321585 | orchestrator | Saturday 28 February 2026 00:42:28 +0000 (0:00:00.193) 0:00:30.432 ***** 2026-02-28 00:42:35.321597 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462) 2026-02-28 00:42:35.321609 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462) 2026-02-28 00:42:35.321620 | orchestrator | 2026-02-28 00:42:35.321631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321643 | orchestrator | Saturday 28 February 2026 00:42:29 +0000 (0:00:00.848) 0:00:31.280 ***** 2026-02-28 00:42:35.321670 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d23b355c-9115-4a32-83d9-d27c9bfa2660) 2026-02-28 00:42:35.321680 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d23b355c-9115-4a32-83d9-d27c9bfa2660) 2026-02-28 00:42:35.321690 | orchestrator | 2026-02-28 00:42:35.321700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321710 | orchestrator | Saturday 28 February 2026 00:42:29 +0000 (0:00:00.461) 0:00:31.741 ***** 2026-02-28 00:42:35.321719 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2f6d4770-6b80-415d-bad7-939321dd0d14) 2026-02-28 00:42:35.321729 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2f6d4770-6b80-415d-bad7-939321dd0d14) 2026-02-28 00:42:35.321739 | orchestrator | 2026-02-28 00:42:35.321749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:42:35.321758 | orchestrator | Saturday 28 February 2026 00:42:30 +0000 (0:00:00.459) 0:00:32.200 ***** 2026-02-28 00:42:35.321768 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_74c1e4c5-3021-4968-89ab-b5ccd24df7f0) 2026-02-28 00:42:35.321778 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_74c1e4c5-3021-4968-89ab-b5ccd24df7f0) 2026-02-28 00:42:35.321788 | orchestrator | 2026-02-28 00:42:35.321797 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-28 00:42:35.321807 | orchestrator | Saturday 28 February 2026 00:42:30 +0000 (0:00:00.465) 0:00:32.666 ***** 2026-02-28 00:42:35.321817 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:42:35.321826 | orchestrator | 2026-02-28 00:42:35.321836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.321863 | orchestrator | Saturday 28 February 2026 00:42:30 +0000 (0:00:00.353) 0:00:33.019 ***** 2026-02-28 00:42:35.321874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-28 00:42:35.321884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-28 00:42:35.321894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-28 00:42:35.321904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-28 00:42:35.321919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-28 00:42:35.321929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-28 00:42:35.321939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-28 00:42:35.321949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-28 00:42:35.321959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-28 00:42:35.321968 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-28 00:42:35.321978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-28 00:42:35.321988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-28 00:42:35.321997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-28 00:42:35.322007 | orchestrator | 2026-02-28 00:42:35.322100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322114 | orchestrator | Saturday 28 February 2026 00:42:31 +0000 (0:00:00.416) 0:00:33.436 ***** 2026-02-28 00:42:35.322123 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322133 | orchestrator | 2026-02-28 00:42:35.322143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322153 | orchestrator | Saturday 28 February 2026 00:42:31 +0000 (0:00:00.201) 0:00:33.637 ***** 2026-02-28 00:42:35.322162 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322172 | orchestrator | 2026-02-28 00:42:35.322182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322191 | orchestrator | Saturday 28 February 2026 00:42:31 +0000 (0:00:00.245) 0:00:33.883 ***** 2026-02-28 00:42:35.322201 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322210 | orchestrator | 2026-02-28 00:42:35.322220 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322230 | orchestrator | Saturday 28 February 2026 00:42:32 +0000 (0:00:00.237) 0:00:34.121 ***** 2026-02-28 00:42:35.322240 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322249 | orchestrator | 2026-02-28 00:42:35.322259 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322269 | orchestrator | Saturday 28 February 2026 00:42:32 +0000 (0:00:00.197) 0:00:34.319 ***** 2026-02-28 00:42:35.322278 
| orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322301 | orchestrator | 2026-02-28 00:42:35.322311 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322321 | orchestrator | Saturday 28 February 2026 00:42:32 +0000 (0:00:00.230) 0:00:34.550 ***** 2026-02-28 00:42:35.322341 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322350 | orchestrator | 2026-02-28 00:42:35.322360 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322370 | orchestrator | Saturday 28 February 2026 00:42:33 +0000 (0:00:00.806) 0:00:35.356 ***** 2026-02-28 00:42:35.322379 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322388 | orchestrator | 2026-02-28 00:42:35.322398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322408 | orchestrator | Saturday 28 February 2026 00:42:33 +0000 (0:00:00.243) 0:00:35.600 ***** 2026-02-28 00:42:35.322417 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322427 | orchestrator | 2026-02-28 00:42:35.322436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322446 | orchestrator | Saturday 28 February 2026 00:42:33 +0000 (0:00:00.225) 0:00:35.826 ***** 2026-02-28 00:42:35.322456 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-28 00:42:35.322476 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-28 00:42:35.322486 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-28 00:42:35.322496 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-28 00:42:35.322505 | orchestrator | 2026-02-28 00:42:35.322515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322525 | orchestrator | Saturday 28 February 2026 00:42:34 +0000 (0:00:00.693) 
0:00:36.520 ***** 2026-02-28 00:42:35.322535 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322544 | orchestrator | 2026-02-28 00:42:35.322554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322563 | orchestrator | Saturday 28 February 2026 00:42:34 +0000 (0:00:00.230) 0:00:36.750 ***** 2026-02-28 00:42:35.322573 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322582 | orchestrator | 2026-02-28 00:42:35.322591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322601 | orchestrator | Saturday 28 February 2026 00:42:34 +0000 (0:00:00.220) 0:00:36.971 ***** 2026-02-28 00:42:35.322611 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322620 | orchestrator | 2026-02-28 00:42:35.322630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:42:35.322639 | orchestrator | Saturday 28 February 2026 00:42:35 +0000 (0:00:00.225) 0:00:37.196 ***** 2026-02-28 00:42:35.322649 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:35.322658 | orchestrator | 2026-02-28 00:42:35.322675 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-28 00:42:39.965701 | orchestrator | Saturday 28 February 2026 00:42:35 +0000 (0:00:00.216) 0:00:37.413 ***** 2026-02-28 00:42:39.965805 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-28 00:42:39.965822 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-28 00:42:39.965836 | orchestrator | 2026-02-28 00:42:39.965850 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-28 00:42:39.965870 | orchestrator | Saturday 28 February 2026 00:42:35 +0000 (0:00:00.233) 0:00:37.647 ***** 2026-02-28 00:42:39.965889 | orchestrator | skipping: 
[testbed-node-5] 2026-02-28 00:42:39.965908 | orchestrator | 2026-02-28 00:42:39.965920 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-28 00:42:39.965931 | orchestrator | Saturday 28 February 2026 00:42:35 +0000 (0:00:00.156) 0:00:37.804 ***** 2026-02-28 00:42:39.965974 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.965986 | orchestrator | 2026-02-28 00:42:39.965998 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-28 00:42:39.966009 | orchestrator | Saturday 28 February 2026 00:42:35 +0000 (0:00:00.158) 0:00:37.962 ***** 2026-02-28 00:42:39.966179 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.966201 | orchestrator | 2026-02-28 00:42:39.966222 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-28 00:42:39.966242 | orchestrator | Saturday 28 February 2026 00:42:36 +0000 (0:00:00.389) 0:00:38.351 ***** 2026-02-28 00:42:39.966262 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:42:39.966304 | orchestrator | 2026-02-28 00:42:39.966327 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-28 00:42:39.966348 | orchestrator | Saturday 28 February 2026 00:42:36 +0000 (0:00:00.151) 0:00:38.503 ***** 2026-02-28 00:42:39.966382 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e9a8b5b-9130-5945-a817-2135e2f57de8'}}) 2026-02-28 00:42:39.966414 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '160cc444-1ede-5c9f-8076-16a146e97f10'}}) 2026-02-28 00:42:39.966434 | orchestrator | 2026-02-28 00:42:39.966451 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-28 00:42:39.966471 | orchestrator | Saturday 28 February 2026 00:42:36 +0000 (0:00:00.175) 0:00:38.678 ***** 2026-02-28 00:42:39.966490 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e9a8b5b-9130-5945-a817-2135e2f57de8'}})  2026-02-28 00:42:39.966541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '160cc444-1ede-5c9f-8076-16a146e97f10'}})  2026-02-28 00:42:39.966562 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.966582 | orchestrator | 2026-02-28 00:42:39.966602 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-28 00:42:39.966620 | orchestrator | Saturday 28 February 2026 00:42:36 +0000 (0:00:00.185) 0:00:38.864 ***** 2026-02-28 00:42:39.966640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e9a8b5b-9130-5945-a817-2135e2f57de8'}})  2026-02-28 00:42:39.966659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '160cc444-1ede-5c9f-8076-16a146e97f10'}})  2026-02-28 00:42:39.966678 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.966697 | orchestrator | 2026-02-28 00:42:39.966715 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-28 00:42:39.966734 | orchestrator | Saturday 28 February 2026 00:42:36 +0000 (0:00:00.154) 0:00:39.018 ***** 2026-02-28 00:42:39.966752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e9a8b5b-9130-5945-a817-2135e2f57de8'}})  2026-02-28 00:42:39.966770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '160cc444-1ede-5c9f-8076-16a146e97f10'}})  2026-02-28 00:42:39.966789 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.966806 | orchestrator | 2026-02-28 00:42:39.966825 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-28 00:42:39.966843 | orchestrator | Saturday 28 February 2026 00:42:37 +0000 
(0:00:00.172) 0:00:39.191 ***** 2026-02-28 00:42:39.966862 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:42:39.966882 | orchestrator | 2026-02-28 00:42:39.966900 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-28 00:42:39.966917 | orchestrator | Saturday 28 February 2026 00:42:37 +0000 (0:00:00.195) 0:00:39.387 ***** 2026-02-28 00:42:39.966934 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:42:39.966952 | orchestrator | 2026-02-28 00:42:39.966972 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-28 00:42:39.966988 | orchestrator | Saturday 28 February 2026 00:42:37 +0000 (0:00:00.179) 0:00:39.566 ***** 2026-02-28 00:42:39.967003 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.967022 | orchestrator | 2026-02-28 00:42:39.967066 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-28 00:42:39.967083 | orchestrator | Saturday 28 February 2026 00:42:37 +0000 (0:00:00.164) 0:00:39.731 ***** 2026-02-28 00:42:39.967098 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.967115 | orchestrator | 2026-02-28 00:42:39.967133 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-28 00:42:39.967152 | orchestrator | Saturday 28 February 2026 00:42:37 +0000 (0:00:00.139) 0:00:39.870 ***** 2026-02-28 00:42:39.967169 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.967186 | orchestrator | 2026-02-28 00:42:39.967203 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-28 00:42:39.967221 | orchestrator | Saturday 28 February 2026 00:42:37 +0000 (0:00:00.159) 0:00:40.030 ***** 2026-02-28 00:42:39.967239 | orchestrator | ok: [testbed-node-5] => { 2026-02-28 00:42:39.967256 | orchestrator |  "ceph_osd_devices": { 2026-02-28 00:42:39.967275 | orchestrator |  "sdb": { 
2026-02-28 00:42:39.967323 | orchestrator |  "osd_lvm_uuid": "4e9a8b5b-9130-5945-a817-2135e2f57de8" 2026-02-28 00:42:39.967343 | orchestrator |  }, 2026-02-28 00:42:39.967360 | orchestrator |  "sdc": { 2026-02-28 00:42:39.967378 | orchestrator |  "osd_lvm_uuid": "160cc444-1ede-5c9f-8076-16a146e97f10" 2026-02-28 00:42:39.967396 | orchestrator |  } 2026-02-28 00:42:39.967415 | orchestrator |  } 2026-02-28 00:42:39.967434 | orchestrator | } 2026-02-28 00:42:39.967452 | orchestrator | 2026-02-28 00:42:39.967544 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-28 00:42:39.967567 | orchestrator | Saturday 28 February 2026 00:42:38 +0000 (0:00:00.146) 0:00:40.177 ***** 2026-02-28 00:42:39.967586 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.967605 | orchestrator | 2026-02-28 00:42:39.967625 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-28 00:42:39.967644 | orchestrator | Saturday 28 February 2026 00:42:38 +0000 (0:00:00.390) 0:00:40.568 ***** 2026-02-28 00:42:39.967665 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.967684 | orchestrator | 2026-02-28 00:42:39.967702 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-28 00:42:39.967721 | orchestrator | Saturday 28 February 2026 00:42:38 +0000 (0:00:00.137) 0:00:40.706 ***** 2026-02-28 00:42:39.967740 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:42:39.967759 | orchestrator | 2026-02-28 00:42:39.967776 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-28 00:42:39.967794 | orchestrator | Saturday 28 February 2026 00:42:38 +0000 (0:00:00.134) 0:00:40.840 ***** 2026-02-28 00:42:39.967813 | orchestrator | changed: [testbed-node-5] => { 2026-02-28 00:42:39.967832 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-28 00:42:39.967852 | orchestrator | 
 "ceph_osd_devices": { 2026-02-28 00:42:39.967872 | orchestrator |  "sdb": { 2026-02-28 00:42:39.967893 | orchestrator |  "osd_lvm_uuid": "4e9a8b5b-9130-5945-a817-2135e2f57de8" 2026-02-28 00:42:39.967913 | orchestrator |  }, 2026-02-28 00:42:39.967933 | orchestrator |  "sdc": { 2026-02-28 00:42:39.967953 | orchestrator |  "osd_lvm_uuid": "160cc444-1ede-5c9f-8076-16a146e97f10" 2026-02-28 00:42:39.967974 | orchestrator |  } 2026-02-28 00:42:39.967993 | orchestrator |  }, 2026-02-28 00:42:39.968010 | orchestrator |  "lvm_volumes": [ 2026-02-28 00:42:39.968029 | orchestrator |  { 2026-02-28 00:42:39.968079 | orchestrator |  "data": "osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8", 2026-02-28 00:42:39.968098 | orchestrator |  "data_vg": "ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8" 2026-02-28 00:42:39.968115 | orchestrator |  }, 2026-02-28 00:42:39.968140 | orchestrator |  { 2026-02-28 00:42:39.968160 | orchestrator |  "data": "osd-block-160cc444-1ede-5c9f-8076-16a146e97f10", 2026-02-28 00:42:39.968179 | orchestrator |  "data_vg": "ceph-160cc444-1ede-5c9f-8076-16a146e97f10" 2026-02-28 00:42:39.968198 | orchestrator |  } 2026-02-28 00:42:39.968217 | orchestrator |  ] 2026-02-28 00:42:39.968235 | orchestrator |  } 2026-02-28 00:42:39.968253 | orchestrator | } 2026-02-28 00:42:39.968271 | orchestrator | 2026-02-28 00:42:39.968288 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-28 00:42:39.968307 | orchestrator | Saturday 28 February 2026 00:42:38 +0000 (0:00:00.231) 0:00:41.072 ***** 2026-02-28 00:42:39.968326 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-28 00:42:39.968345 | orchestrator | 2026-02-28 00:42:39.968364 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:42:39.968384 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 00:42:39.968405 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 00:42:39.968424 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-28 00:42:39.968442 | orchestrator | 2026-02-28 00:42:39.968463 | orchestrator | 2026-02-28 00:42:39.968483 | orchestrator | 2026-02-28 00:42:39.968502 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:42:39.968520 | orchestrator | Saturday 28 February 2026 00:42:39 +0000 (0:00:00.973) 0:00:42.045 ***** 2026-02-28 00:42:39.968559 | orchestrator | =============================================================================== 2026-02-28 00:42:39.968580 | orchestrator | Write configuration file ------------------------------------------------ 3.74s 2026-02-28 00:42:39.968598 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s 2026-02-28 00:42:39.968633 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.17s 2026-02-28 00:42:39.968652 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s 2026-02-28 00:42:39.968670 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2026-02-28 00:42:39.968688 | orchestrator | Print configuration data ------------------------------------------------ 0.91s 2026-02-28 00:42:39.968707 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2026-02-28 00:42:39.968725 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2026-02-28 00:42:39.968742 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s 2026-02-28 00:42:39.968761 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-02-28 
00:42:39.968780 | orchestrator | Print WAL devices ------------------------------------------------------- 0.72s 2026-02-28 00:42:39.968800 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-02-28 00:42:39.968819 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.69s 2026-02-28 00:42:39.968859 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.68s 2026-02-28 00:42:40.327611 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2026-02-28 00:42:40.327741 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s 2026-02-28 00:42:40.327769 | orchestrator | Set DB devices config data ---------------------------------------------- 0.65s 2026-02-28 00:42:40.327789 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-02-28 00:42:40.327810 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2026-02-28 00:42:40.327830 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.60s 2026-02-28 00:43:02.914528 | orchestrator | 2026-02-28 00:43:02 | INFO  | Task 3545d3f5-7ca7-4545-92a0-990f69850ab3 (sync inventory) is running in background. Output coming soon. 
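For reference, the "Compile lvm_volumes" output printed above follows a simple naming scheme: each OSD's `osd_lvm_uuid` names both the logical volume (`osd-block-<uuid>`) and its volume group (`ceph-<uuid>`). A minimal sketch of that derivation, reconstructed from the printed configuration data for testbed-node-5 (the actual OSISM task logic may differ):

```python
# Sketch (assumption): derive the block-only lvm_volumes list from
# ceph_osd_devices, mirroring the structure shown in the log above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "4e9a8b5b-9130-5945-a817-2135e2f57de8"},
    "sdc": {"osd_lvm_uuid": "160cc444-1ede-5c9f-8076-16a146e97f10"},
}

lvm_volumes = [
    {
        # LV name carries the per-OSD UUID ...
        "data": f"osd-block-{cfg['osd_lvm_uuid']}",
        # ... and the VG name reuses the same UUID.
        "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
    }
    for cfg in ceph_osd_devices.values()
]

for vol in lvm_volumes:
    print(vol["data_vg"])
```

With the two devices above this yields exactly the two `lvm_volumes` entries echoed by the "Print configuration data" task.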
2026-02-28 00:43:31.653385 | orchestrator | 2026-02-28 00:43:04 | INFO  | Starting group_vars file reorganization
2026-02-28 00:43:31.653524 | orchestrator | 2026-02-28 00:43:04 | INFO  | Moved 0 file(s) to their respective directories
2026-02-28 00:43:31.653551 | orchestrator | 2026-02-28 00:43:04 | INFO  | Group_vars file reorganization completed
2026-02-28 00:43:31.653571 | orchestrator | 2026-02-28 00:43:07 | INFO  | Starting variable preparation from inventory
2026-02-28 00:43:31.653592 | orchestrator | 2026-02-28 00:43:10 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-28 00:43:31.653611 | orchestrator | 2026-02-28 00:43:10 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-28 00:43:31.653652 | orchestrator | 2026-02-28 00:43:10 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-28 00:43:31.653673 | orchestrator | 2026-02-28 00:43:10 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-28 00:43:31.653691 | orchestrator | 2026-02-28 00:43:10 | INFO  | Variable preparation completed
2026-02-28 00:43:31.653711 | orchestrator | 2026-02-28 00:43:12 | INFO  | Starting inventory overwrite handling
2026-02-28 00:43:31.653730 | orchestrator | 2026-02-28 00:43:12 | INFO  | Handling group overwrites in 99-overwrite
2026-02-28 00:43:31.653749 | orchestrator | 2026-02-28 00:43:12 | INFO  | Removing group frr:children from 60-generic
2026-02-28 00:43:31.653799 | orchestrator | 2026-02-28 00:43:12 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-28 00:43:31.653818 | orchestrator | 2026-02-28 00:43:12 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-28 00:43:31.653837 | orchestrator | 2026-02-28 00:43:12 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-28 00:43:31.653856 | orchestrator | 2026-02-28 00:43:12 | INFO  | Handling group overwrites in 20-roles
2026-02-28 00:43:31.653875 | orchestrator | 2026-02-28 00:43:12 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-28 00:43:31.653894 | orchestrator | 2026-02-28 00:43:12 | INFO  | Removed 5 group(s) in total
2026-02-28 00:43:31.653913 | orchestrator | 2026-02-28 00:43:12 | INFO  | Inventory overwrite handling completed
2026-02-28 00:43:31.653934 | orchestrator | 2026-02-28 00:43:13 | INFO  | Starting merge of inventory files
2026-02-28 00:43:31.653953 | orchestrator | 2026-02-28 00:43:13 | INFO  | Inventory files merged successfully
2026-02-28 00:43:31.653973 | orchestrator | 2026-02-28 00:43:19 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-28 00:43:31.653993 | orchestrator | 2026-02-28 00:43:30 | INFO  | Successfully wrote ClusterShell configuration
2026-02-28 00:43:31.654101 | orchestrator | [master 34cfba6] 2026-02-28-00-43
2026-02-28 00:43:31.654125 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-28 00:43:33.828182 | orchestrator | 2026-02-28 00:43:33 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-02-28 00:43:33.884450 | orchestrator | 2026-02-28 00:43:33 | INFO  | Task 4cedc442-a344-41e5-a962-1414ac2c152c (ceph-create-lvm-devices) was prepared for execution.
2026-02-28 00:43:33.884551 | orchestrator | 2026-02-28 00:43:33 | INFO  | It takes a moment until task 4cedc442-a344-41e5-a962-1414ac2c152c (ceph-create-lvm-devices) has been started and output is visible here.
2026-02-28 00:43:48.230847 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-28 00:43:48.231005 | orchestrator | 2.16.14
2026-02-28 00:43:48.231057 | orchestrator |
2026-02-28 00:43:48.231072 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-28 00:43:48.231089 | orchestrator |
2026-02-28 00:43:48.231108 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-28 00:43:48.231128 | orchestrator | Saturday 28 February 2026 00:43:39 +0000 (0:00:00.416) 0:00:00.416 *****
2026-02-28 00:43:48.231147 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-28 00:43:48.231168 | orchestrator |
2026-02-28 00:43:48.231189 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-28 00:43:48.231208 | orchestrator | Saturday 28 February 2026 00:43:40 +0000 (0:00:00.313) 0:00:00.729 *****
2026-02-28 00:43:48.231227 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:43:48.231239 | orchestrator |
2026-02-28 00:43:48.231251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231262 | orchestrator | Saturday 28 February 2026 00:43:40 +0000 (0:00:00.245) 0:00:00.975 *****
2026-02-28 00:43:48.231274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-28 00:43:48.231286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-28 00:43:48.231297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-28 00:43:48.231308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-28 00:43:48.231319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-28 00:43:48.231330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-28 00:43:48.231342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-28 00:43:48.231383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-28 00:43:48.231395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-28 00:43:48.231406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-28 00:43:48.231417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-28 00:43:48.231428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-28 00:43:48.231440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-28 00:43:48.231451 | orchestrator |
2026-02-28 00:43:48.231462 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231474 | orchestrator | Saturday 28 February 2026 00:43:41 +0000 (0:00:00.786) 0:00:01.762 *****
2026-02-28 00:43:48.231485 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.231497 | orchestrator |
2026-02-28 00:43:48.231508 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231519 | orchestrator | Saturday 28 February 2026 00:43:41 +0000 (0:00:00.193) 0:00:01.955 *****
2026-02-28 00:43:48.231530 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.231541 | orchestrator |
2026-02-28 00:43:48.231553 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231564 | orchestrator | Saturday 28 February 2026 00:43:41 +0000 (0:00:00.189) 0:00:02.144 *****
2026-02-28 00:43:48.231575 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.231586 | orchestrator |
2026-02-28 00:43:48.231597 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231608 | orchestrator | Saturday 28 February 2026 00:43:41 +0000 (0:00:00.180) 0:00:02.325 *****
2026-02-28 00:43:48.231619 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.231630 | orchestrator |
2026-02-28 00:43:48.231642 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231653 | orchestrator | Saturday 28 February 2026 00:43:41 +0000 (0:00:00.212) 0:00:02.538 *****
2026-02-28 00:43:48.231664 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.231675 | orchestrator |
2026-02-28 00:43:48.231686 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231719 | orchestrator | Saturday 28 February 2026 00:43:42 +0000 (0:00:00.211) 0:00:02.750 *****
2026-02-28 00:43:48.231731 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.231743 | orchestrator |
2026-02-28 00:43:48.231754 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231765 | orchestrator | Saturday 28 February 2026 00:43:42 +0000 (0:00:00.221) 0:00:02.972 *****
2026-02-28 00:43:48.231776 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.231787 | orchestrator |
2026-02-28 00:43:48.231798 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231809 | orchestrator | Saturday 28 February 2026 00:43:42 +0000 (0:00:00.235) 0:00:03.207 *****
2026-02-28 00:43:48.231820 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.231832 | orchestrator |
2026-02-28 00:43:48.231843 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231855 | orchestrator | Saturday 28 February 2026 00:43:42 +0000 (0:00:00.234) 0:00:03.442 *****
2026-02-28 00:43:48.231866 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c)
2026-02-28 00:43:48.231879 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c)
2026-02-28 00:43:48.231891 | orchestrator |
2026-02-28 00:43:48.231911 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.231952 | orchestrator | Saturday 28 February 2026 00:43:43 +0000 (0:00:00.504) 0:00:03.947 *****
2026-02-28 00:43:48.231984 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5ed1e25e-e858-43bf-b647-15f2d5789185)
2026-02-28 00:43:48.231996 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5ed1e25e-e858-43bf-b647-15f2d5789185)
2026-02-28 00:43:48.232007 | orchestrator |
2026-02-28 00:43:48.232044 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.232057 | orchestrator | Saturday 28 February 2026 00:43:43 +0000 (0:00:00.703) 0:00:04.650 *****
2026-02-28 00:43:48.232068 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e0efcc73-9d13-408e-8d84-f67f704dc102)
2026-02-28 00:43:48.232079 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e0efcc73-9d13-408e-8d84-f67f704dc102)
2026-02-28 00:43:48.232090 | orchestrator |
2026-02-28 00:43:48.232101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.232112 | orchestrator | Saturday 28 February 2026 00:43:44 +0000 (0:00:00.902) 0:00:05.553 *****
2026-02-28 00:43:48.232123 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dee7b1c7-019b-4aff-807a-ca0205e3afa9)
2026-02-28 00:43:48.232134 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dee7b1c7-019b-4aff-807a-ca0205e3afa9)
2026-02-28 00:43:48.232145 | orchestrator |
2026-02-28 00:43:48.232156 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:43:48.232167 | orchestrator | Saturday 28 February 2026 00:43:45 +0000 (0:00:01.067) 0:00:06.620 *****
2026-02-28 00:43:48.232178 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-28 00:43:48.232189 | orchestrator |
2026-02-28 00:43:48.232200 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:48.232211 | orchestrator | Saturday 28 February 2026 00:43:46 +0000 (0:00:00.366) 0:00:06.986 *****
2026-02-28 00:43:48.232222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-28 00:43:48.232233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-28 00:43:48.232244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-28 00:43:48.232254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-28 00:43:48.232265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-28 00:43:48.232283 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-28 00:43:48.232295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-28 00:43:48.232306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-28 00:43:48.232317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-28 00:43:48.232328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-28 00:43:48.232338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-28 00:43:48.232349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-28 00:43:48.232360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-28 00:43:48.232371 | orchestrator |
2026-02-28 00:43:48.232382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:48.232393 | orchestrator | Saturday 28 February 2026 00:43:46 +0000 (0:00:00.428) 0:00:07.415 *****
2026-02-28 00:43:48.232404 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.232415 | orchestrator |
2026-02-28 00:43:48.232427 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:48.232437 | orchestrator | Saturday 28 February 2026 00:43:46 +0000 (0:00:00.229) 0:00:07.645 *****
2026-02-28 00:43:48.232456 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.232467 | orchestrator |
2026-02-28 00:43:48.232478 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:48.232490 | orchestrator | Saturday 28 February 2026 00:43:47 +0000 (0:00:00.215) 0:00:07.860 *****
2026-02-28 00:43:48.232501 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.232512 | orchestrator |
2026-02-28 00:43:48.232523 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:48.232534 | orchestrator | Saturday 28 February 2026 00:43:47 +0000 (0:00:00.205) 0:00:08.066 *****
2026-02-28 00:43:48.232545 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.232555 | orchestrator |
2026-02-28 00:43:48.232566 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:48.232577 | orchestrator | Saturday 28 February 2026 00:43:47 +0000 (0:00:00.212) 0:00:08.278 *****
2026-02-28 00:43:48.232588 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.232599 | orchestrator |
2026-02-28 00:43:48.232610 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:48.232621 | orchestrator | Saturday 28 February 2026 00:43:47 +0000 (0:00:00.207) 0:00:08.486 *****
2026-02-28 00:43:48.232632 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.232643 | orchestrator |
2026-02-28 00:43:48.232654 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:48.232665 | orchestrator | Saturday 28 February 2026 00:43:48 +0000 (0:00:00.183) 0:00:08.670 *****
2026-02-28 00:43:48.232676 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:48.232687 | orchestrator |
2026-02-28 00:43:48.232706 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:56.554308 | orchestrator | Saturday 28 February 2026 00:43:48 +0000 (0:00:00.202) 0:00:08.872 *****
2026-02-28 00:43:56.554431 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.554451 | orchestrator |
2026-02-28 00:43:56.554467 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:56.554482 | orchestrator | Saturday 28 February 2026 00:43:48 +0000 (0:00:00.234) 0:00:09.106 *****
2026-02-28 00:43:56.554496 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-28 00:43:56.554510 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-28 00:43:56.554524 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-28 00:43:56.554537 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-28 00:43:56.554551 | orchestrator |
2026-02-28 00:43:56.554565 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:56.554574 | orchestrator | Saturday 28 February 2026 00:43:49 +0000 (0:00:01.086) 0:00:10.193 *****
2026-02-28 00:43:56.554582 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.554590 | orchestrator |
2026-02-28 00:43:56.554598 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:56.554607 | orchestrator | Saturday 28 February 2026 00:43:49 +0000 (0:00:00.220) 0:00:10.414 *****
2026-02-28 00:43:56.554615 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.554623 | orchestrator |
2026-02-28 00:43:56.554631 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:56.554639 | orchestrator | Saturday 28 February 2026 00:43:49 +0000 (0:00:00.225) 0:00:10.639 *****
2026-02-28 00:43:56.554647 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.554655 | orchestrator |
2026-02-28 00:43:56.554663 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:43:56.554670 | orchestrator | Saturday 28 February 2026 00:43:50 +0000 (0:00:00.209) 0:00:10.848 *****
2026-02-28 00:43:56.554677 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.554684 | orchestrator |
2026-02-28 00:43:56.554691 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-28 00:43:56.554697 | orchestrator | Saturday 28 February 2026 00:43:50 +0000 (0:00:00.210) 0:00:11.059 *****
2026-02-28 00:43:56.554704 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.554730 | orchestrator |
2026-02-28 00:43:56.554737 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-28 00:43:56.554744 | orchestrator | Saturday 28 February 2026 00:43:50 +0000 (0:00:00.154) 0:00:11.214 *****
2026-02-28 00:43:56.554751 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e2365387-977d-5b6c-ac86-7516065bddb2'}})
2026-02-28 00:43:56.554759 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c221fe87-4514-5691-85ae-4cf2e32a6a79'}})
2026-02-28 00:43:56.554766 | orchestrator |
2026-02-28 00:43:56.554773 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-28 00:43:56.554780 | orchestrator | Saturday 28 February 2026 00:43:50 +0000 (0:00:00.203) 0:00:11.418 *****
2026-02-28 00:43:56.554788 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:43:56.554796 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:43:56.554803 | orchestrator |
2026-02-28 00:43:56.554810 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-28 00:43:56.554817 | orchestrator | Saturday 28 February 2026 00:43:52 +0000 (0:00:01.948) 0:00:13.366 *****
2026-02-28 00:43:56.554825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:43:56.554834 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:43:56.554843 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.554850 | orchestrator |
2026-02-28 00:43:56.554859 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-28 00:43:56.554866 | orchestrator | Saturday 28 February 2026 00:43:52 +0000 (0:00:00.197) 0:00:13.564 *****
2026-02-28 00:43:56.554874 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:43:56.554882 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:43:56.554890 | orchestrator |
2026-02-28 00:43:56.554911 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-28 00:43:56.554920 | orchestrator | Saturday 28 February 2026 00:43:54 +0000 (0:00:01.452) 0:00:15.016 *****
2026-02-28 00:43:56.554927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:43:56.554935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:43:56.554943 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.554951 | orchestrator |
2026-02-28 00:43:56.554958 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-28 00:43:56.554966 | orchestrator | Saturday 28 February 2026 00:43:54 +0000 (0:00:00.153) 0:00:15.202 *****
2026-02-28 00:43:56.554988 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.554997 | orchestrator |
2026-02-28 00:43:56.555005 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-28 00:43:56.555034 | orchestrator | Saturday 28 February 2026 00:43:54 +0000 (0:00:00.153) 0:00:15.356 *****
2026-02-28 00:43:56.555044 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:43:56.555051 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:43:56.555077 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.555084 | orchestrator |
2026-02-28 00:43:56.555093 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-28 00:43:56.555100 | orchestrator | Saturday 28 February 2026 00:43:55 +0000 (0:00:00.386) 0:00:15.742 *****
2026-02-28 00:43:56.555107 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.555115 | orchestrator |
2026-02-28 00:43:56.555122 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-28 00:43:56.555130 | orchestrator | Saturday 28 February 2026 00:43:55 +0000 (0:00:00.157) 0:00:15.899 *****
2026-02-28 00:43:56.555137 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:43:56.555145 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:43:56.555153 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.555161 | orchestrator |
2026-02-28 00:43:56.555168 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-28 00:43:56.555176 | orchestrator | Saturday 28 February 2026 00:43:55 +0000 (0:00:00.164) 0:00:16.064 *****
2026-02-28 00:43:56.555183 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.555191 | orchestrator |
2026-02-28 00:43:56.555197 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-28 00:43:56.555204 | orchestrator | Saturday 28 February 2026 00:43:55 +0000 (0:00:00.160) 0:00:16.224 *****
2026-02-28 00:43:56.555211 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:43:56.555222 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:43:56.555229 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.555236 | orchestrator |
2026-02-28 00:43:56.555242 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-28 00:43:56.555249 | orchestrator | Saturday 28 February 2026 00:43:55 +0000 (0:00:00.184) 0:00:16.409 *****
2026-02-28 00:43:56.555256 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:43:56.555263 | orchestrator |
2026-02-28 00:43:56.555270 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-28 00:43:56.555276 | orchestrator | Saturday 28 February 2026 00:43:55 +0000 (0:00:00.144) 0:00:16.553 *****
2026-02-28 00:43:56.555283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:43:56.555290 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:43:56.555297 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.555304 | orchestrator |
2026-02-28 00:43:56.555311 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-28 00:43:56.555317 | orchestrator | Saturday 28 February 2026 00:43:56 +0000 (0:00:00.190) 0:00:16.744 *****
2026-02-28 00:43:56.555324 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:43:56.555331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:43:56.555338 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.555344 | orchestrator |
2026-02-28 00:43:56.555351 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-28 00:43:56.555363 | orchestrator | Saturday 28 February 2026 00:43:56 +0000 (0:00:00.158) 0:00:16.902 *****
2026-02-28 00:43:56.555370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:43:56.555376 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:43:56.555383 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.555390 | orchestrator |
2026-02-28 00:43:56.555397 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-28 00:43:56.555403 | orchestrator | Saturday 28 February 2026 00:43:56 +0000 (0:00:00.155) 0:00:17.058 *****
2026-02-28 00:43:56.555410 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:43:56.555417 | orchestrator |
2026-02-28 00:43:56.555423 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-28 00:43:56.555435 | orchestrator | Saturday 28 February 2026 00:43:56 +0000 (0:00:00.143) 0:00:17.202 *****
2026-02-28 00:44:03.159580 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159635 | orchestrator |
2026-02-28 00:44:03.159641 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-28 00:44:03.159646 | orchestrator | Saturday 28 February 2026 00:43:56 +0000 (0:00:00.137) 0:00:17.339 *****
2026-02-28 00:44:03.159650 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159654 | orchestrator |
2026-02-28 00:44:03.159658 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-28 00:44:03.159661 | orchestrator | Saturday 28 February 2026 00:43:56 +0000 (0:00:00.127) 0:00:17.467 *****
2026-02-28 00:44:03.159665 | orchestrator | ok: [testbed-node-3] => {
2026-02-28 00:44:03.159679 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-28 00:44:03.159683 | orchestrator | }
2026-02-28 00:44:03.159687 | orchestrator |
2026-02-28 00:44:03.159695 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-28 00:44:03.159699 | orchestrator | Saturday 28 February 2026 00:43:57 +0000 (0:00:00.349) 0:00:17.816 *****
2026-02-28 00:44:03.159703 | orchestrator | ok: [testbed-node-3] => {
2026-02-28 00:44:03.159708 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-28 00:44:03.159712 | orchestrator | }
2026-02-28 00:44:03.159716 | orchestrator |
2026-02-28 00:44:03.159720 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-28 00:44:03.159724 | orchestrator | Saturday 28 February 2026 00:43:57 +0000 (0:00:00.145) 0:00:17.962 *****
2026-02-28 00:44:03.159728 | orchestrator | ok: [testbed-node-3] => {
2026-02-28 00:44:03.159732 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-28 00:44:03.159736 | orchestrator | }
2026-02-28 00:44:03.159740 | orchestrator |
2026-02-28 00:44:03.159743 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-28 00:44:03.159747 | orchestrator | Saturday 28 February 2026 00:43:57 +0000 (0:00:00.163) 0:00:18.125 *****
2026-02-28 00:44:03.159751 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:03.159755 | orchestrator |
2026-02-28 00:44:03.159759 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-28 00:44:03.159763 | orchestrator | Saturday 28 February 2026 00:43:58 +0000 (0:00:00.667) 0:00:18.793 *****
2026-02-28 00:44:03.159767 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:03.159771 | orchestrator |
2026-02-28 00:44:03.159775 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-28 00:44:03.159778 | orchestrator | Saturday 28 February 2026 00:43:58 +0000 (0:00:00.495) 0:00:19.288 *****
2026-02-28 00:44:03.159782 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:03.159786 | orchestrator |
2026-02-28 00:44:03.159790 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-28 00:44:03.159794 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.488) 0:00:19.777 *****
2026-02-28 00:44:03.159798 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:44:03.159802 | orchestrator |
2026-02-28 00:44:03.159817 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-28 00:44:03.159822 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.130) 0:00:19.908 *****
2026-02-28 00:44:03.159825 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159829 | orchestrator |
2026-02-28 00:44:03.159833 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-28 00:44:03.159837 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.115) 0:00:20.023 *****
2026-02-28 00:44:03.159841 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159845 | orchestrator |
2026-02-28 00:44:03.159849 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-28 00:44:03.159852 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.107) 0:00:20.131 *****
2026-02-28 00:44:03.159856 | orchestrator | ok: [testbed-node-3] => {
2026-02-28 00:44:03.159860 | orchestrator |     "vgs_report": {
2026-02-28 00:44:03.159864 | orchestrator |         "vg": []
2026-02-28 00:44:03.159868 | orchestrator |     }
2026-02-28 00:44:03.159872 | orchestrator | }
2026-02-28 00:44:03.159876 | orchestrator |
2026-02-28 00:44:03.159880 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-28 00:44:03.159884 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.181) 0:00:20.313 *****
2026-02-28 00:44:03.159887 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159891 | orchestrator |
2026-02-28 00:44:03.159895 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-28 00:44:03.159899 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.128) 0:00:20.441 *****
2026-02-28 00:44:03.159902 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159906 | orchestrator |
2026-02-28 00:44:03.159910 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-28 00:44:03.159914 | orchestrator | Saturday 28 February 2026 00:43:59 +0000 (0:00:00.140) 0:00:20.581 *****
2026-02-28 00:44:03.159918 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159922 | orchestrator |
2026-02-28 00:44:03.159925 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-28 00:44:03.159929 | orchestrator | Saturday 28 February 2026 00:44:00 +0000 (0:00:00.366) 0:00:20.948 *****
2026-02-28 00:44:03.159933 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159937 | orchestrator |
2026-02-28 00:44:03.159941 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-28 00:44:03.159944 | orchestrator | Saturday 28 February 2026 00:44:00 +0000 (0:00:00.153) 0:00:21.102 *****
2026-02-28 00:44:03.159948 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159952 | orchestrator |
2026-02-28 00:44:03.159956 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-28 00:44:03.159960 | orchestrator | Saturday 28 February 2026 00:44:00 +0000 (0:00:00.134) 0:00:21.236 *****
2026-02-28 00:44:03.159963 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159967 | orchestrator |
2026-02-28 00:44:03.159971 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-28 00:44:03.159975 | orchestrator | Saturday 28 February 2026 00:44:00 +0000 (0:00:00.150) 0:00:21.387 *****
2026-02-28 00:44:03.159979 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.159982 | orchestrator |
2026-02-28 00:44:03.159986 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-28 00:44:03.159990 | orchestrator | Saturday 28 February 2026 00:44:00 +0000 (0:00:00.132) 0:00:21.520 *****
2026-02-28 00:44:03.160001 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.160005 | orchestrator |
2026-02-28 00:44:03.160009 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-28 00:44:03.160046 | orchestrator | Saturday 28 February 2026 00:44:01 +0000 (0:00:00.143) 0:00:21.667 *****
2026-02-28 00:44:03.160050 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.160054 | orchestrator |
2026-02-28 00:44:03.160057 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-28 00:44:03.160065 | orchestrator | Saturday 28 February 2026 00:44:01 +0000 (0:00:00.143) 0:00:21.811 *****
2026-02-28 00:44:03.160068 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:44:03.160072 | orchestrator |
2026-02-28 00:44:03.160076
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-28 00:44:03.160080 | orchestrator | Saturday 28 February 2026 00:44:01 +0000 (0:00:00.144) 0:00:21.956 ***** 2026-02-28 00:44:03.160083 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:03.160087 | orchestrator | 2026-02-28 00:44:03.160100 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-28 00:44:03.160104 | orchestrator | Saturday 28 February 2026 00:44:01 +0000 (0:00:00.154) 0:00:22.110 ***** 2026-02-28 00:44:03.160108 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:03.160112 | orchestrator | 2026-02-28 00:44:03.160116 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-28 00:44:03.160119 | orchestrator | Saturday 28 February 2026 00:44:01 +0000 (0:00:00.145) 0:00:22.255 ***** 2026-02-28 00:44:03.160123 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:03.160127 | orchestrator | 2026-02-28 00:44:03.160131 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-28 00:44:03.160135 | orchestrator | Saturday 28 February 2026 00:44:01 +0000 (0:00:00.134) 0:00:22.390 ***** 2026-02-28 00:44:03.160138 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:03.160142 | orchestrator | 2026-02-28 00:44:03.160146 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-28 00:44:03.160150 | orchestrator | Saturday 28 February 2026 00:44:01 +0000 (0:00:00.155) 0:00:22.545 ***** 2026-02-28 00:44:03.160154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:03.160159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 
'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:03.160163 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:03.160167 | orchestrator | 2026-02-28 00:44:03.160170 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-28 00:44:03.160176 | orchestrator | Saturday 28 February 2026 00:44:02 +0000 (0:00:00.477) 0:00:23.023 ***** 2026-02-28 00:44:03.160181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:03.160186 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:03.160190 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:03.160195 | orchestrator | 2026-02-28 00:44:03.160199 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-28 00:44:03.160203 | orchestrator | Saturday 28 February 2026 00:44:02 +0000 (0:00:00.187) 0:00:23.210 ***** 2026-02-28 00:44:03.160208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:03.160213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:03.160217 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:03.160221 | orchestrator | 2026-02-28 00:44:03.160226 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-28 00:44:03.160230 | orchestrator | Saturday 28 February 2026 00:44:02 +0000 (0:00:00.163) 0:00:23.374 ***** 2026-02-28 00:44:03.160235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:03.160239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:03.160246 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:03.160251 | orchestrator | 2026-02-28 00:44:03.160255 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-28 00:44:03.160259 | orchestrator | Saturday 28 February 2026 00:44:02 +0000 (0:00:00.166) 0:00:23.541 ***** 2026-02-28 00:44:03.160264 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:03.160268 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:03.160273 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:03.160277 | orchestrator | 2026-02-28 00:44:03.160281 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-28 00:44:03.160286 | orchestrator | Saturday 28 February 2026 00:44:03 +0000 (0:00:00.183) 0:00:23.724 ***** 2026-02-28 00:44:03.160293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:08.609244 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:08.609357 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:08.609373 | orchestrator | 2026-02-28 00:44:08.609386 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-28 00:44:08.609399 | orchestrator | Saturday 28 February 2026 00:44:03 +0000 (0:00:00.210) 0:00:23.935 ***** 2026-02-28 00:44:08.609410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:08.609440 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:08.609452 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:08.609474 | orchestrator | 2026-02-28 00:44:08.609485 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-28 00:44:08.609497 | orchestrator | Saturday 28 February 2026 00:44:03 +0000 (0:00:00.161) 0:00:24.097 ***** 2026-02-28 00:44:08.609508 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:08.609519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:08.609531 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:08.609542 | orchestrator | 2026-02-28 00:44:08.609553 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-28 00:44:08.609564 | orchestrator | Saturday 28 February 2026 00:44:03 +0000 (0:00:00.186) 0:00:24.283 ***** 2026-02-28 00:44:08.609575 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:08.609587 | orchestrator | 2026-02-28 00:44:08.609599 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-28 00:44:08.609610 | orchestrator | Saturday 28 February 2026 00:44:04 +0000 
(0:00:00.512) 0:00:24.796 ***** 2026-02-28 00:44:08.609621 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:08.609632 | orchestrator | 2026-02-28 00:44:08.609643 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-28 00:44:08.609672 | orchestrator | Saturday 28 February 2026 00:44:04 +0000 (0:00:00.501) 0:00:25.297 ***** 2026-02-28 00:44:08.609684 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:44:08.609695 | orchestrator | 2026-02-28 00:44:08.609706 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-28 00:44:08.609717 | orchestrator | Saturday 28 February 2026 00:44:04 +0000 (0:00:00.167) 0:00:25.465 ***** 2026-02-28 00:44:08.609750 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'vg_name': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'}) 2026-02-28 00:44:08.609763 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'vg_name': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'}) 2026-02-28 00:44:08.609774 | orchestrator | 2026-02-28 00:44:08.609785 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-28 00:44:08.609796 | orchestrator | Saturday 28 February 2026 00:44:04 +0000 (0:00:00.168) 0:00:25.633 ***** 2026-02-28 00:44:08.609807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:08.609819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:08.609830 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:08.609841 | orchestrator | 2026-02-28 00:44:08.609852 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-28 00:44:08.609864 | orchestrator | Saturday 28 February 2026 00:44:05 +0000 (0:00:00.371) 0:00:26.005 ***** 2026-02-28 00:44:08.609875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:08.609886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:08.609897 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:08.609908 | orchestrator | 2026-02-28 00:44:08.609919 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-28 00:44:08.609930 | orchestrator | Saturday 28 February 2026 00:44:05 +0000 (0:00:00.167) 0:00:26.172 ***** 2026-02-28 00:44:08.609941 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})  2026-02-28 00:44:08.609953 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})  2026-02-28 00:44:08.609964 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:44:08.609974 | orchestrator | 2026-02-28 00:44:08.609985 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-28 00:44:08.609996 | orchestrator | Saturday 28 February 2026 00:44:05 +0000 (0:00:00.159) 0:00:26.332 ***** 2026-02-28 00:44:08.610110 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 00:44:08.610126 | orchestrator |  "lvm_report": { 2026-02-28 00:44:08.610138 | orchestrator |  "lv": [ 2026-02-28 00:44:08.610149 | orchestrator |  { 2026-02-28 00:44:08.610160 | orchestrator |  "lv_name": 
"osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79", 2026-02-28 00:44:08.610172 | orchestrator |  "vg_name": "ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79" 2026-02-28 00:44:08.610217 | orchestrator |  }, 2026-02-28 00:44:08.610229 | orchestrator |  { 2026-02-28 00:44:08.610241 | orchestrator |  "lv_name": "osd-block-e2365387-977d-5b6c-ac86-7516065bddb2", 2026-02-28 00:44:08.610251 | orchestrator |  "vg_name": "ceph-e2365387-977d-5b6c-ac86-7516065bddb2" 2026-02-28 00:44:08.610262 | orchestrator |  } 2026-02-28 00:44:08.610273 | orchestrator |  ], 2026-02-28 00:44:08.610284 | orchestrator |  "pv": [ 2026-02-28 00:44:08.610295 | orchestrator |  { 2026-02-28 00:44:08.610306 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-28 00:44:08.610317 | orchestrator |  "vg_name": "ceph-e2365387-977d-5b6c-ac86-7516065bddb2" 2026-02-28 00:44:08.610327 | orchestrator |  }, 2026-02-28 00:44:08.610338 | orchestrator |  { 2026-02-28 00:44:08.610359 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-28 00:44:08.610370 | orchestrator |  "vg_name": "ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79" 2026-02-28 00:44:08.610381 | orchestrator |  } 2026-02-28 00:44:08.610393 | orchestrator |  ] 2026-02-28 00:44:08.610404 | orchestrator |  } 2026-02-28 00:44:08.610415 | orchestrator | } 2026-02-28 00:44:08.610426 | orchestrator | 2026-02-28 00:44:08.610437 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-28 00:44:08.610448 | orchestrator | 2026-02-28 00:44:08.610459 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-28 00:44:08.610470 | orchestrator | Saturday 28 February 2026 00:44:05 +0000 (0:00:00.294) 0:00:26.627 ***** 2026-02-28 00:44:08.610481 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-28 00:44:08.610492 | orchestrator | 2026-02-28 00:44:08.610503 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 
00:44:08.610514 | orchestrator | Saturday 28 February 2026 00:44:06 +0000 (0:00:00.248) 0:00:26.875 ***** 2026-02-28 00:44:08.610525 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:44:08.610536 | orchestrator | 2026-02-28 00:44:08.610547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:08.610558 | orchestrator | Saturday 28 February 2026 00:44:06 +0000 (0:00:00.244) 0:00:27.120 ***** 2026-02-28 00:44:08.610569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-28 00:44:08.610580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-28 00:44:08.610591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-28 00:44:08.610602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-28 00:44:08.610613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-28 00:44:08.610623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-28 00:44:08.610634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-28 00:44:08.610645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-28 00:44:08.610656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-28 00:44:08.610667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-28 00:44:08.610677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-28 00:44:08.610688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-28 00:44:08.610699 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-28 00:44:08.610709 | orchestrator | 2026-02-28 00:44:08.610720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:08.610731 | orchestrator | Saturday 28 February 2026 00:44:06 +0000 (0:00:00.428) 0:00:27.548 ***** 2026-02-28 00:44:08.610742 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:08.610752 | orchestrator | 2026-02-28 00:44:08.610763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:08.610782 | orchestrator | Saturday 28 February 2026 00:44:07 +0000 (0:00:00.224) 0:00:27.773 ***** 2026-02-28 00:44:08.610794 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:08.610805 | orchestrator | 2026-02-28 00:44:08.610816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:08.610826 | orchestrator | Saturday 28 February 2026 00:44:07 +0000 (0:00:00.194) 0:00:27.968 ***** 2026-02-28 00:44:08.610837 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:08.610848 | orchestrator | 2026-02-28 00:44:08.610859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:08.610877 | orchestrator | Saturday 28 February 2026 00:44:07 +0000 (0:00:00.664) 0:00:28.633 ***** 2026-02-28 00:44:08.610888 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:08.610898 | orchestrator | 2026-02-28 00:44:08.610909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:08.610920 | orchestrator | Saturday 28 February 2026 00:44:08 +0000 (0:00:00.211) 0:00:28.844 ***** 2026-02-28 00:44:08.610931 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:08.610942 | orchestrator | 2026-02-28 00:44:08.610952 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-28 00:44:08.610963 | orchestrator | Saturday 28 February 2026 00:44:08 +0000 (0:00:00.210) 0:00:29.055 ***** 2026-02-28 00:44:08.610974 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:08.610985 | orchestrator | 2026-02-28 00:44:08.611003 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:20.436128 | orchestrator | Saturday 28 February 2026 00:44:08 +0000 (0:00:00.202) 0:00:29.257 ***** 2026-02-28 00:44:20.436254 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.436281 | orchestrator | 2026-02-28 00:44:20.436302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:20.436320 | orchestrator | Saturday 28 February 2026 00:44:08 +0000 (0:00:00.205) 0:00:29.463 ***** 2026-02-28 00:44:20.436338 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.436357 | orchestrator | 2026-02-28 00:44:20.436376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:20.436395 | orchestrator | Saturday 28 February 2026 00:44:09 +0000 (0:00:00.200) 0:00:29.663 ***** 2026-02-28 00:44:20.436413 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc) 2026-02-28 00:44:20.436433 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc) 2026-02-28 00:44:20.436451 | orchestrator | 2026-02-28 00:44:20.436469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:20.436487 | orchestrator | Saturday 28 February 2026 00:44:09 +0000 (0:00:00.406) 0:00:30.070 ***** 2026-02-28 00:44:20.436503 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d) 2026-02-28 00:44:20.436518 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d) 2026-02-28 00:44:20.436532 | orchestrator | 2026-02-28 00:44:20.436547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:20.436564 | orchestrator | Saturday 28 February 2026 00:44:09 +0000 (0:00:00.486) 0:00:30.557 ***** 2026-02-28 00:44:20.436582 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd) 2026-02-28 00:44:20.436599 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd) 2026-02-28 00:44:20.436619 | orchestrator | 2026-02-28 00:44:20.436638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:20.436658 | orchestrator | Saturday 28 February 2026 00:44:10 +0000 (0:00:00.450) 0:00:31.007 ***** 2026-02-28 00:44:20.436699 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0) 2026-02-28 00:44:20.436722 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0) 2026-02-28 00:44:20.436742 | orchestrator | 2026-02-28 00:44:20.436762 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:20.436781 | orchestrator | Saturday 28 February 2026 00:44:11 +0000 (0:00:00.729) 0:00:31.737 ***** 2026-02-28 00:44:20.436801 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-28 00:44:20.436821 | orchestrator | 2026-02-28 00:44:20.436841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.436860 | orchestrator | Saturday 28 February 2026 00:44:11 +0000 (0:00:00.625) 0:00:32.362 ***** 2026-02-28 00:44:20.436907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-28 00:44:20.436929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-28 00:44:20.436948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-28 00:44:20.436965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-28 00:44:20.436983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-28 00:44:20.437001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-28 00:44:20.437050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-28 00:44:20.437069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-28 00:44:20.437087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-28 00:44:20.437105 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-28 00:44:20.437123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-28 00:44:20.437141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-28 00:44:20.437159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-28 00:44:20.437178 | orchestrator | 2026-02-28 00:44:20.437194 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.437209 | orchestrator | Saturday 28 February 2026 00:44:12 +0000 (0:00:00.952) 0:00:33.314 ***** 2026-02-28 00:44:20.437226 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.437245 | orchestrator | 2026-02-28 
00:44:20.437263 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.437281 | orchestrator | Saturday 28 February 2026 00:44:12 +0000 (0:00:00.247) 0:00:33.561 ***** 2026-02-28 00:44:20.437298 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.437317 | orchestrator | 2026-02-28 00:44:20.437336 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.437353 | orchestrator | Saturday 28 February 2026 00:44:13 +0000 (0:00:00.251) 0:00:33.813 ***** 2026-02-28 00:44:20.437372 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.437389 | orchestrator | 2026-02-28 00:44:20.437431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.437450 | orchestrator | Saturday 28 February 2026 00:44:13 +0000 (0:00:00.233) 0:00:34.046 ***** 2026-02-28 00:44:20.437468 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.437487 | orchestrator | 2026-02-28 00:44:20.437505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.437524 | orchestrator | Saturday 28 February 2026 00:44:13 +0000 (0:00:00.210) 0:00:34.256 ***** 2026-02-28 00:44:20.437542 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.437560 | orchestrator | 2026-02-28 00:44:20.437578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.437596 | orchestrator | Saturday 28 February 2026 00:44:13 +0000 (0:00:00.220) 0:00:34.477 ***** 2026-02-28 00:44:20.437614 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.437632 | orchestrator | 2026-02-28 00:44:20.437650 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.437667 | orchestrator | Saturday 28 February 2026 00:44:14 +0000 (0:00:00.217) 
0:00:34.694 ***** 2026-02-28 00:44:20.437686 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.437704 | orchestrator | 2026-02-28 00:44:20.437722 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.437740 | orchestrator | Saturday 28 February 2026 00:44:14 +0000 (0:00:00.250) 0:00:34.945 ***** 2026-02-28 00:44:20.437772 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.437791 | orchestrator | 2026-02-28 00:44:20.437809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.437827 | orchestrator | Saturday 28 February 2026 00:44:14 +0000 (0:00:00.246) 0:00:35.192 ***** 2026-02-28 00:44:20.437844 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-28 00:44:20.437863 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-28 00:44:20.437881 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-28 00:44:20.437899 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-28 00:44:20.437917 | orchestrator | 2026-02-28 00:44:20.437935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.437954 | orchestrator | Saturday 28 February 2026 00:44:15 +0000 (0:00:00.913) 0:00:36.106 ***** 2026-02-28 00:44:20.437972 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.437990 | orchestrator | 2026-02-28 00:44:20.438098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.438127 | orchestrator | Saturday 28 February 2026 00:44:15 +0000 (0:00:00.219) 0:00:36.325 ***** 2026-02-28 00:44:20.438157 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.438176 | orchestrator | 2026-02-28 00:44:20.438194 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.438211 | orchestrator | Saturday 28 
February 2026 00:44:16 +0000 (0:00:00.731) 0:00:37.056 ***** 2026-02-28 00:44:20.438229 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.438246 | orchestrator | 2026-02-28 00:44:20.438264 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-28 00:44:20.438283 | orchestrator | Saturday 28 February 2026 00:44:16 +0000 (0:00:00.202) 0:00:37.258 ***** 2026-02-28 00:44:20.438301 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.438319 | orchestrator | 2026-02-28 00:44:20.438338 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-28 00:44:20.438356 | orchestrator | Saturday 28 February 2026 00:44:16 +0000 (0:00:00.217) 0:00:37.476 ***** 2026-02-28 00:44:20.438374 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.438392 | orchestrator | 2026-02-28 00:44:20.438409 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-28 00:44:20.438426 | orchestrator | Saturday 28 February 2026 00:44:16 +0000 (0:00:00.141) 0:00:37.617 ***** 2026-02-28 00:44:20.438442 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'}}) 2026-02-28 00:44:20.438460 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4d8e79be-6c7a-5031-8b8d-1755de447a00'}}) 2026-02-28 00:44:20.438479 | orchestrator | 2026-02-28 00:44:20.438497 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-28 00:44:20.438513 | orchestrator | Saturday 28 February 2026 00:44:17 +0000 (0:00:00.209) 0:00:37.827 ***** 2026-02-28 00:44:20.438530 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'}) 2026-02-28 00:44:20.438546 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'}) 2026-02-28 00:44:20.438561 | orchestrator | 2026-02-28 00:44:20.438576 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-28 00:44:20.438591 | orchestrator | Saturday 28 February 2026 00:44:18 +0000 (0:00:01.814) 0:00:39.641 ***** 2026-02-28 00:44:20.438606 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:20.438623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:20.438649 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:20.438664 | orchestrator | 2026-02-28 00:44:20.438678 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-28 00:44:20.438692 | orchestrator | Saturday 28 February 2026 00:44:19 +0000 (0:00:00.157) 0:00:39.798 ***** 2026-02-28 00:44:20.438704 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'}) 2026-02-28 00:44:20.438732 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'}) 2026-02-28 00:44:26.311489 | orchestrator | 2026-02-28 00:44:26.311599 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-28 00:44:26.311618 | orchestrator | Saturday 28 February 2026 00:44:20 +0000 (0:00:01.378) 0:00:41.177 ***** 2026-02-28 00:44:26.311631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 
'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:26.311645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:26.311656 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.311669 | orchestrator | 2026-02-28 00:44:26.311680 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-28 00:44:26.311691 | orchestrator | Saturday 28 February 2026 00:44:20 +0000 (0:00:00.162) 0:00:41.340 ***** 2026-02-28 00:44:26.311702 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.311714 | orchestrator | 2026-02-28 00:44:26.311725 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-28 00:44:26.311736 | orchestrator | Saturday 28 February 2026 00:44:20 +0000 (0:00:00.138) 0:00:41.478 ***** 2026-02-28 00:44:26.311747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:26.311758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:26.311770 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.311780 | orchestrator | 2026-02-28 00:44:26.311791 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-28 00:44:26.311810 | orchestrator | Saturday 28 February 2026 00:44:20 +0000 (0:00:00.145) 0:00:41.623 ***** 2026-02-28 00:44:26.311830 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.311847 | orchestrator | 2026-02-28 00:44:26.311865 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-28 00:44:26.311882 | orchestrator | 
Saturday 28 February 2026 00:44:21 +0000 (0:00:00.150) 0:00:41.774 ***** 2026-02-28 00:44:26.311900 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:26.311917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:26.311935 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.311953 | orchestrator | 2026-02-28 00:44:26.311971 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-28 00:44:26.311989 | orchestrator | Saturday 28 February 2026 00:44:21 +0000 (0:00:00.401) 0:00:42.176 ***** 2026-02-28 00:44:26.312038 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.312060 | orchestrator | 2026-02-28 00:44:26.312079 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-28 00:44:26.312104 | orchestrator | Saturday 28 February 2026 00:44:21 +0000 (0:00:00.152) 0:00:42.328 ***** 2026-02-28 00:44:26.312128 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:26.312178 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:26.312198 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.312220 | orchestrator | 2026-02-28 00:44:26.312241 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-28 00:44:26.312286 | orchestrator | Saturday 28 February 2026 00:44:21 +0000 (0:00:00.159) 0:00:42.487 ***** 2026-02-28 00:44:26.312304 | orchestrator | ok: [testbed-node-4] 
2026-02-28 00:44:26.312317 | orchestrator | 2026-02-28 00:44:26.312330 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-28 00:44:26.312343 | orchestrator | Saturday 28 February 2026 00:44:21 +0000 (0:00:00.140) 0:00:42.628 ***** 2026-02-28 00:44:26.312355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:26.312366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:26.312377 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.312388 | orchestrator | 2026-02-28 00:44:26.312399 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-28 00:44:26.312410 | orchestrator | Saturday 28 February 2026 00:44:22 +0000 (0:00:00.164) 0:00:42.792 ***** 2026-02-28 00:44:26.312421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:26.312432 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:26.312443 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.312454 | orchestrator | 2026-02-28 00:44:26.312466 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-28 00:44:26.312498 | orchestrator | Saturday 28 February 2026 00:44:22 +0000 (0:00:00.171) 0:00:42.964 ***** 2026-02-28 00:44:26.312511 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 
00:44:26.312522 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:26.312533 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.312543 | orchestrator | 2026-02-28 00:44:26.312554 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-28 00:44:26.312565 | orchestrator | Saturday 28 February 2026 00:44:22 +0000 (0:00:00.177) 0:00:43.141 ***** 2026-02-28 00:44:26.312576 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.312587 | orchestrator | 2026-02-28 00:44:26.312598 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-28 00:44:26.312609 | orchestrator | Saturday 28 February 2026 00:44:22 +0000 (0:00:00.162) 0:00:43.304 ***** 2026-02-28 00:44:26.312620 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.312631 | orchestrator | 2026-02-28 00:44:26.312642 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-28 00:44:26.312652 | orchestrator | Saturday 28 February 2026 00:44:22 +0000 (0:00:00.136) 0:00:43.441 ***** 2026-02-28 00:44:26.312663 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.312674 | orchestrator | 2026-02-28 00:44:26.312685 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-28 00:44:26.312696 | orchestrator | Saturday 28 February 2026 00:44:22 +0000 (0:00:00.139) 0:00:43.580 ***** 2026-02-28 00:44:26.312707 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 00:44:26.312718 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-28 00:44:26.312739 | orchestrator | } 2026-02-28 00:44:26.312751 | orchestrator | 2026-02-28 00:44:26.312761 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-28 
00:44:26.312772 | orchestrator | Saturday 28 February 2026 00:44:23 +0000 (0:00:00.144) 0:00:43.725 ***** 2026-02-28 00:44:26.312783 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 00:44:26.312794 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-28 00:44:26.312805 | orchestrator | } 2026-02-28 00:44:26.312816 | orchestrator | 2026-02-28 00:44:26.312832 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-28 00:44:26.312843 | orchestrator | Saturday 28 February 2026 00:44:23 +0000 (0:00:00.134) 0:00:43.859 ***** 2026-02-28 00:44:26.312854 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 00:44:26.312865 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-28 00:44:26.312877 | orchestrator | } 2026-02-28 00:44:26.312888 | orchestrator | 2026-02-28 00:44:26.312899 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-28 00:44:26.312910 | orchestrator | Saturday 28 February 2026 00:44:23 +0000 (0:00:00.384) 0:00:44.244 ***** 2026-02-28 00:44:26.312921 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:44:26.312932 | orchestrator | 2026-02-28 00:44:26.312943 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-28 00:44:26.312954 | orchestrator | Saturday 28 February 2026 00:44:24 +0000 (0:00:00.519) 0:00:44.763 ***** 2026-02-28 00:44:26.312965 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:44:26.312976 | orchestrator | 2026-02-28 00:44:26.312987 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-28 00:44:26.312999 | orchestrator | Saturday 28 February 2026 00:44:24 +0000 (0:00:00.526) 0:00:45.289 ***** 2026-02-28 00:44:26.313032 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:44:26.313043 | orchestrator | 2026-02-28 00:44:26.313055 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-28 00:44:26.313066 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.496) 0:00:45.785 ***** 2026-02-28 00:44:26.313077 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:44:26.313088 | orchestrator | 2026-02-28 00:44:26.313099 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-28 00:44:26.313110 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.162) 0:00:45.948 ***** 2026-02-28 00:44:26.313121 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.313132 | orchestrator | 2026-02-28 00:44:26.313143 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-28 00:44:26.313154 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.126) 0:00:46.075 ***** 2026-02-28 00:44:26.313165 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.313176 | orchestrator | 2026-02-28 00:44:26.313188 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-28 00:44:26.313199 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.106) 0:00:46.181 ***** 2026-02-28 00:44:26.313210 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 00:44:26.313221 | orchestrator |  "vgs_report": { 2026-02-28 00:44:26.313233 | orchestrator |  "vg": [] 2026-02-28 00:44:26.313245 | orchestrator |  } 2026-02-28 00:44:26.313257 | orchestrator | } 2026-02-28 00:44:26.313268 | orchestrator | 2026-02-28 00:44:26.313279 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-28 00:44:26.313291 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.162) 0:00:46.343 ***** 2026-02-28 00:44:26.313302 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.313312 | orchestrator | 2026-02-28 00:44:26.313324 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-28 00:44:26.313335 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.153) 0:00:46.497 ***** 2026-02-28 00:44:26.313346 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.313356 | orchestrator | 2026-02-28 00:44:26.313368 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-28 00:44:26.313399 | orchestrator | Saturday 28 February 2026 00:44:25 +0000 (0:00:00.144) 0:00:46.641 ***** 2026-02-28 00:44:26.313410 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.313421 | orchestrator | 2026-02-28 00:44:26.313432 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-28 00:44:26.313443 | orchestrator | Saturday 28 February 2026 00:44:26 +0000 (0:00:00.157) 0:00:46.798 ***** 2026-02-28 00:44:26.313455 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:26.313466 | orchestrator | 2026-02-28 00:44:26.313484 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-28 00:44:31.043518 | orchestrator | Saturday 28 February 2026 00:44:26 +0000 (0:00:00.163) 0:00:46.962 ***** 2026-02-28 00:44:31.043631 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.043647 | orchestrator | 2026-02-28 00:44:31.043659 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-28 00:44:31.043671 | orchestrator | Saturday 28 February 2026 00:44:26 +0000 (0:00:00.376) 0:00:47.338 ***** 2026-02-28 00:44:31.043682 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.043693 | orchestrator | 2026-02-28 00:44:31.043704 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-28 00:44:31.043715 | orchestrator | Saturday 28 February 2026 00:44:26 +0000 (0:00:00.152) 0:00:47.490 ***** 2026-02-28 00:44:31.043726 | orchestrator | skipping: [testbed-node-4] 
2026-02-28 00:44:31.043736 | orchestrator | 2026-02-28 00:44:31.043748 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-28 00:44:31.043758 | orchestrator | Saturday 28 February 2026 00:44:26 +0000 (0:00:00.137) 0:00:47.627 ***** 2026-02-28 00:44:31.043769 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.043780 | orchestrator | 2026-02-28 00:44:31.043791 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-28 00:44:31.043802 | orchestrator | Saturday 28 February 2026 00:44:27 +0000 (0:00:00.169) 0:00:47.797 ***** 2026-02-28 00:44:31.043813 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.043824 | orchestrator | 2026-02-28 00:44:31.043835 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-28 00:44:31.043846 | orchestrator | Saturday 28 February 2026 00:44:27 +0000 (0:00:00.125) 0:00:47.922 ***** 2026-02-28 00:44:31.043857 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.043868 | orchestrator | 2026-02-28 00:44:31.043878 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-28 00:44:31.043889 | orchestrator | Saturday 28 February 2026 00:44:27 +0000 (0:00:00.172) 0:00:48.095 ***** 2026-02-28 00:44:31.043900 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.043911 | orchestrator | 2026-02-28 00:44:31.043922 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-28 00:44:31.043933 | orchestrator | Saturday 28 February 2026 00:44:27 +0000 (0:00:00.157) 0:00:48.252 ***** 2026-02-28 00:44:31.043961 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.043972 | orchestrator | 2026-02-28 00:44:31.043983 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-28 00:44:31.043994 | orchestrator | 
Saturday 28 February 2026 00:44:27 +0000 (0:00:00.144) 0:00:48.396 ***** 2026-02-28 00:44:31.044032 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.044044 | orchestrator | 2026-02-28 00:44:31.044057 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-28 00:44:31.044070 | orchestrator | Saturday 28 February 2026 00:44:27 +0000 (0:00:00.140) 0:00:48.537 ***** 2026-02-28 00:44:31.044082 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.044095 | orchestrator | 2026-02-28 00:44:31.044107 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-28 00:44:31.044120 | orchestrator | Saturday 28 February 2026 00:44:28 +0000 (0:00:00.143) 0:00:48.681 ***** 2026-02-28 00:44:31.044134 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:31.044175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:31.044188 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.044200 | orchestrator | 2026-02-28 00:44:31.044213 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-28 00:44:31.044226 | orchestrator | Saturday 28 February 2026 00:44:28 +0000 (0:00:00.162) 0:00:48.843 ***** 2026-02-28 00:44:31.044237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:31.044248 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:31.044259 | orchestrator | skipping: 
[testbed-node-4] 2026-02-28 00:44:31.044270 | orchestrator | 2026-02-28 00:44:31.044281 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-28 00:44:31.044292 | orchestrator | Saturday 28 February 2026 00:44:28 +0000 (0:00:00.165) 0:00:49.009 ***** 2026-02-28 00:44:31.044303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:31.044314 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:31.044325 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.044336 | orchestrator | 2026-02-28 00:44:31.044346 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-28 00:44:31.044357 | orchestrator | Saturday 28 February 2026 00:44:28 +0000 (0:00:00.355) 0:00:49.364 ***** 2026-02-28 00:44:31.044369 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:31.044380 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:31.044391 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.044402 | orchestrator | 2026-02-28 00:44:31.044430 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-28 00:44:31.044442 | orchestrator | Saturday 28 February 2026 00:44:28 +0000 (0:00:00.152) 0:00:49.517 ***** 2026-02-28 00:44:31.044453 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 
'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:31.044465 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:31.044476 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.044487 | orchestrator | 2026-02-28 00:44:31.044498 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-28 00:44:31.044509 | orchestrator | Saturday 28 February 2026 00:44:29 +0000 (0:00:00.158) 0:00:49.675 ***** 2026-02-28 00:44:31.044520 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:31.044531 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:31.044542 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.044553 | orchestrator | 2026-02-28 00:44:31.044564 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-28 00:44:31.044575 | orchestrator | Saturday 28 February 2026 00:44:29 +0000 (0:00:00.161) 0:00:49.836 ***** 2026-02-28 00:44:31.044586 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:31.044605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:31.044616 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.044627 | orchestrator | 2026-02-28 00:44:31.044638 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-28 
00:44:31.044649 | orchestrator | Saturday 28 February 2026 00:44:29 +0000 (0:00:00.151) 0:00:49.987 ***** 2026-02-28 00:44:31.044660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:31.044672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:31.044683 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.044693 | orchestrator | 2026-02-28 00:44:31.044704 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-28 00:44:31.044716 | orchestrator | Saturday 28 February 2026 00:44:29 +0000 (0:00:00.151) 0:00:50.139 ***** 2026-02-28 00:44:31.044726 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:44:31.044737 | orchestrator | 2026-02-28 00:44:31.044748 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-28 00:44:31.044759 | orchestrator | Saturday 28 February 2026 00:44:29 +0000 (0:00:00.474) 0:00:50.613 ***** 2026-02-28 00:44:31.044770 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:44:31.044781 | orchestrator | 2026-02-28 00:44:31.044792 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-28 00:44:31.044803 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.520) 0:00:51.134 ***** 2026-02-28 00:44:31.044814 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:44:31.044825 | orchestrator | 2026-02-28 00:44:31.044835 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-28 00:44:31.044846 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.173) 0:00:51.308 ***** 2026-02-28 00:44:31.044857 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'vg_name': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'}) 2026-02-28 00:44:31.044869 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'vg_name': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'}) 2026-02-28 00:44:31.044880 | orchestrator | 2026-02-28 00:44:31.044891 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-28 00:44:31.044902 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.160) 0:00:51.469 ***** 2026-02-28 00:44:31.044913 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:31.044924 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:31.044935 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:31.044946 | orchestrator | 2026-02-28 00:44:31.044957 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-28 00:44:31.044968 | orchestrator | Saturday 28 February 2026 00:44:30 +0000 (0:00:00.156) 0:00:51.626 ***** 2026-02-28 00:44:31.044979 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:31.044996 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:37.084825 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:37.084924 | orchestrator | 2026-02-28 00:44:37.084934 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-28 00:44:37.084942 | 
orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.140) 0:00:51.766 ***** 2026-02-28 00:44:37.084948 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})  2026-02-28 00:44:37.084956 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})  2026-02-28 00:44:37.084962 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:44:37.084968 | orchestrator | 2026-02-28 00:44:37.084974 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-28 00:44:37.084980 | orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.136) 0:00:51.903 ***** 2026-02-28 00:44:37.084986 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 00:44:37.084992 | orchestrator |  "lvm_report": { 2026-02-28 00:44:37.084999 | orchestrator |  "lv": [ 2026-02-28 00:44:37.085042 | orchestrator |  { 2026-02-28 00:44:37.085048 | orchestrator |  "lv_name": "osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00", 2026-02-28 00:44:37.085055 | orchestrator |  "vg_name": "ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00" 2026-02-28 00:44:37.085061 | orchestrator |  }, 2026-02-28 00:44:37.085067 | orchestrator |  { 2026-02-28 00:44:37.085072 | orchestrator |  "lv_name": "osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e", 2026-02-28 00:44:37.085078 | orchestrator |  "vg_name": "ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e" 2026-02-28 00:44:37.085084 | orchestrator |  } 2026-02-28 00:44:37.085090 | orchestrator |  ], 2026-02-28 00:44:37.085096 | orchestrator |  "pv": [ 2026-02-28 00:44:37.085102 | orchestrator |  { 2026-02-28 00:44:37.085108 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-28 00:44:37.085117 | orchestrator |  "vg_name": "ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e" 2026-02-28 00:44:37.085123 | orchestrator |  }, 2026-02-28 
00:44:37.085129 | orchestrator |  { 2026-02-28 00:44:37.085135 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-28 00:44:37.085141 | orchestrator |  "vg_name": "ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00" 2026-02-28 00:44:37.085147 | orchestrator |  } 2026-02-28 00:44:37.085153 | orchestrator |  ] 2026-02-28 00:44:37.085158 | orchestrator |  } 2026-02-28 00:44:37.085165 | orchestrator | } 2026-02-28 00:44:37.085171 | orchestrator | 2026-02-28 00:44:37.085177 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-28 00:44:37.085183 | orchestrator | 2026-02-28 00:44:37.085189 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-28 00:44:37.085195 | orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.478) 0:00:52.381 ***** 2026-02-28 00:44:37.085201 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-28 00:44:37.085207 | orchestrator | 2026-02-28 00:44:37.085213 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-28 00:44:37.085219 | orchestrator | Saturday 28 February 2026 00:44:31 +0000 (0:00:00.248) 0:00:52.629 ***** 2026-02-28 00:44:37.085225 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:44:37.085231 | orchestrator | 2026-02-28 00:44:37.085237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-28 00:44:37.085242 | orchestrator | Saturday 28 February 2026 00:44:32 +0000 (0:00:00.236) 0:00:52.865 ***** 2026-02-28 00:44:37.085248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-28 00:44:37.085254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-28 00:44:37.085260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-28 00:44:37.085266 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-28 00:44:37.085279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-28 00:44:37.085285 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-28 00:44:37.085291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-28 00:44:37.085296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-28 00:44:37.085302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-28 00:44:37.085311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-28 00:44:37.085317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-28 00:44:37.085323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-28 00:44:37.085329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-28 00:44:37.085335 | orchestrator |
2026-02-28 00:44:37.085342 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085348 | orchestrator | Saturday 28 February 2026 00:44:32 +0000 (0:00:00.435) 0:00:53.301 *****
2026-02-28 00:44:37.085355 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:37.085361 | orchestrator |
2026-02-28 00:44:37.085368 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085375 | orchestrator | Saturday 28 February 2026 00:44:32 +0000 (0:00:00.198) 0:00:53.499 *****
2026-02-28 00:44:37.085382 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:37.085388 | orchestrator |
2026-02-28 00:44:37.085395 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085413 | orchestrator | Saturday 28 February 2026 00:44:33 +0000 (0:00:00.214) 0:00:53.714 *****
2026-02-28 00:44:37.085420 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:37.085426 | orchestrator |
2026-02-28 00:44:37.085433 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085439 | orchestrator | Saturday 28 February 2026 00:44:33 +0000 (0:00:00.196) 0:00:53.911 *****
2026-02-28 00:44:37.085446 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:37.085453 | orchestrator |
2026-02-28 00:44:37.085460 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085466 | orchestrator | Saturday 28 February 2026 00:44:33 +0000 (0:00:00.204) 0:00:54.115 *****
2026-02-28 00:44:37.085472 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:37.085478 | orchestrator |
2026-02-28 00:44:37.085484 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085490 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.631) 0:00:54.747 *****
2026-02-28 00:44:37.085496 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:37.085501 | orchestrator |
2026-02-28 00:44:37.085507 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085513 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.207) 0:00:54.955 *****
2026-02-28 00:44:37.085519 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:37.085525 | orchestrator |
2026-02-28 00:44:37.085531 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085537 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.220) 0:00:55.176 *****
2026-02-28 00:44:37.085543 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:37.085548 | orchestrator |
2026-02-28 00:44:37.085554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085560 | orchestrator | Saturday 28 February 2026 00:44:34 +0000 (0:00:00.197) 0:00:55.373 *****
2026-02-28 00:44:37.085566 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462)
2026-02-28 00:44:37.085577 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462)
2026-02-28 00:44:37.085587 | orchestrator |
2026-02-28 00:44:37.085592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085598 | orchestrator | Saturday 28 February 2026 00:44:35 +0000 (0:00:00.412) 0:00:55.786 *****
2026-02-28 00:44:37.085604 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d23b355c-9115-4a32-83d9-d27c9bfa2660)
2026-02-28 00:44:37.085610 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d23b355c-9115-4a32-83d9-d27c9bfa2660)
2026-02-28 00:44:37.085616 | orchestrator |
2026-02-28 00:44:37.085622 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085627 | orchestrator | Saturday 28 February 2026 00:44:35 +0000 (0:00:00.425) 0:00:56.211 *****
2026-02-28 00:44:37.085633 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2f6d4770-6b80-415d-bad7-939321dd0d14)
2026-02-28 00:44:37.085639 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2f6d4770-6b80-415d-bad7-939321dd0d14)
2026-02-28 00:44:37.085645 | orchestrator |
2026-02-28 00:44:37.085651 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085657 | orchestrator | Saturday 28 February 2026 00:44:35 +0000 (0:00:00.432) 0:00:56.644 *****
2026-02-28 00:44:37.085662 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_74c1e4c5-3021-4968-89ab-b5ccd24df7f0)
2026-02-28 00:44:37.085668 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_74c1e4c5-3021-4968-89ab-b5ccd24df7f0)
2026-02-28 00:44:37.085674 | orchestrator |
2026-02-28 00:44:37.085680 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-28 00:44:37.085686 | orchestrator | Saturday 28 February 2026 00:44:36 +0000 (0:00:00.430) 0:00:57.074 *****
2026-02-28 00:44:37.085692 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-28 00:44:37.085698 | orchestrator |
2026-02-28 00:44:37.085704 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:37.085709 | orchestrator | Saturday 28 February 2026 00:44:36 +0000 (0:00:00.323) 0:00:57.398 *****
2026-02-28 00:44:37.085715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-28 00:44:37.085721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-28 00:44:37.085727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-28 00:44:37.085733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-28 00:44:37.085739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-28 00:44:37.085744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-28 00:44:37.085750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-28 00:44:37.085756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-28 00:44:37.085762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-28 00:44:37.085768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-28 00:44:37.085783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-28 00:44:37.085799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-28 00:44:46.191272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-28 00:44:46.191395 | orchestrator |
2026-02-28 00:44:46.191419 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.191435 | orchestrator | Saturday 28 February 2026 00:44:37 +0000 (0:00:00.417) 0:00:57.815 *****
2026-02-28 00:44:46.191484 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.191502 | orchestrator |
2026-02-28 00:44:46.191516 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.191531 | orchestrator | Saturday 28 February 2026 00:44:37 +0000 (0:00:00.198) 0:00:58.014 *****
2026-02-28 00:44:46.191545 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.191560 | orchestrator |
2026-02-28 00:44:46.191575 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.191591 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.678) 0:00:58.693 *****
2026-02-28 00:44:46.191607 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.191623 | orchestrator |
2026-02-28 00:44:46.191639 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.191654 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.238) 0:00:58.931 *****
2026-02-28 00:44:46.191670 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.191686 | orchestrator |
2026-02-28 00:44:46.191703 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.191719 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.202) 0:00:59.134 *****
2026-02-28 00:44:46.191735 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.191750 | orchestrator |
2026-02-28 00:44:46.191766 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.191781 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.202) 0:00:59.336 *****
2026-02-28 00:44:46.191797 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.191813 | orchestrator |
2026-02-28 00:44:46.191845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.191861 | orchestrator | Saturday 28 February 2026 00:44:38 +0000 (0:00:00.199) 0:00:59.536 *****
2026-02-28 00:44:46.191876 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.191891 | orchestrator |
2026-02-28 00:44:46.191907 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.191924 | orchestrator | Saturday 28 February 2026 00:44:39 +0000 (0:00:00.216) 0:00:59.752 *****
2026-02-28 00:44:46.191940 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.191955 | orchestrator |
2026-02-28 00:44:46.191970 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.191985 | orchestrator | Saturday 28 February 2026 00:44:39 +0000 (0:00:00.206) 0:00:59.958 *****
2026-02-28 00:44:46.192027 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-28 00:44:46.192046 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-28 00:44:46.192062 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-28 00:44:46.192078 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-28 00:44:46.192093 | orchestrator |
2026-02-28 00:44:46.192108 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.192122 | orchestrator | Saturday 28 February 2026 00:44:39 +0000 (0:00:00.671) 0:01:00.630 *****
2026-02-28 00:44:46.192136 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.192150 | orchestrator |
2026-02-28 00:44:46.192164 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.192179 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.202) 0:01:00.832 *****
2026-02-28 00:44:46.192195 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.192209 | orchestrator |
2026-02-28 00:44:46.192225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.192239 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.211) 0:01:01.043 *****
2026-02-28 00:44:46.192254 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.192269 | orchestrator |
2026-02-28 00:44:46.192283 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-28 00:44:46.192298 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.215) 0:01:01.258 *****
2026-02-28 00:44:46.192327 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.192342 | orchestrator |
2026-02-28 00:44:46.192357 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-28 00:44:46.192373 | orchestrator | Saturday 28 February 2026 00:44:40 +0000 (0:00:00.222) 0:01:01.481 *****
2026-02-28 00:44:46.192387 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.192401 | orchestrator |
2026-02-28 00:44:46.192416 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-28 00:44:46.192431 | orchestrator | Saturday 28 February 2026 00:44:41 +0000 (0:00:00.277) 0:01:01.759 *****
2026-02-28 00:44:46.192445 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e9a8b5b-9130-5945-a817-2135e2f57de8'}})
2026-02-28 00:44:46.192461 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '160cc444-1ede-5c9f-8076-16a146e97f10'}})
2026-02-28 00:44:46.192474 | orchestrator |
2026-02-28 00:44:46.192489 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-28 00:44:46.192504 | orchestrator | Saturday 28 February 2026 00:44:41 +0000 (0:00:00.187) 0:01:01.946 *****
2026-02-28 00:44:46.192520 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:46.192536 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:46.192550 | orchestrator |
2026-02-28 00:44:46.192564 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-28 00:44:46.192602 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:01.883) 0:01:03.829 *****
2026-02-28 00:44:46.192621 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:46.192637 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:46.192652 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.192666 | orchestrator |
2026-02-28 00:44:46.192681 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-28 00:44:46.192697 | orchestrator | Saturday 28 February 2026 00:44:43 +0000 (0:00:00.184) 0:01:04.013 *****
2026-02-28 00:44:46.192711 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:46.192726 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:46.192741 | orchestrator |
2026-02-28 00:44:46.192756 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-28 00:44:46.192771 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:01.380) 0:01:05.394 *****
2026-02-28 00:44:46.192786 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:46.192801 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:46.192816 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.192832 | orchestrator |
2026-02-28 00:44:46.192846 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-28 00:44:46.192861 | orchestrator | Saturday 28 February 2026 00:44:44 +0000 (0:00:00.200) 0:01:05.595 *****
2026-02-28 00:44:46.192876 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.192891 | orchestrator |
2026-02-28 00:44:46.192906 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-28 00:44:46.192922 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.137) 0:01:05.732 *****
2026-02-28 00:44:46.192948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:46.192963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:46.192978 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.192994 | orchestrator |
2026-02-28 00:44:46.193034 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-28 00:44:46.193049 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.155) 0:01:05.888 *****
2026-02-28 00:44:46.193065 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.193080 | orchestrator |
2026-02-28 00:44:46.193096 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-28 00:44:46.193111 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.145) 0:01:06.033 *****
2026-02-28 00:44:46.193125 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:46.193139 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:46.193155 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.193170 | orchestrator |
2026-02-28 00:44:46.193187 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-28 00:44:46.193216 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.140) 0:01:06.174 *****
2026-02-28 00:44:46.193233 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.193248 | orchestrator |
2026-02-28 00:44:46.193265 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-28 00:44:46.193281 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.138) 0:01:06.312 *****
2026-02-28 00:44:46.193297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:46.193313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:46.193328 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:46.193341 | orchestrator |
2026-02-28 00:44:46.193353 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-28 00:44:46.193366 | orchestrator | Saturday 28 February 2026 00:44:45 +0000 (0:00:00.148) 0:01:06.461 *****
2026-02-28 00:44:46.193381 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:44:46.193395 | orchestrator |
2026-02-28 00:44:46.193410 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-28 00:44:46.193426 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.311) 0:01:06.773 *****
2026-02-28 00:44:46.193455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:52.281052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:52.281159 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.281175 | orchestrator |
2026-02-28 00:44:52.281187 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-28 00:44:52.281198 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.158) 0:01:06.931 *****
2026-02-28 00:44:52.281208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:52.281219 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:52.281250 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.281260 | orchestrator |
2026-02-28 00:44:52.281270 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-28 00:44:52.281280 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.152) 0:01:07.084 *****
2026-02-28 00:44:52.281290 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:52.281300 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:52.281310 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.281320 | orchestrator |
2026-02-28 00:44:52.281329 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-28 00:44:52.281354 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.155) 0:01:07.240 *****
2026-02-28 00:44:52.281364 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.281374 | orchestrator |
2026-02-28 00:44:52.281383 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-28 00:44:52.281393 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.135) 0:01:07.375 *****
2026-02-28 00:44:52.281402 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.281412 | orchestrator |
2026-02-28 00:44:52.281421 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-28 00:44:52.281431 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.128) 0:01:07.504 *****
2026-02-28 00:44:52.281440 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.281449 | orchestrator |
2026-02-28 00:44:52.281459 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-28 00:44:52.281469 | orchestrator | Saturday 28 February 2026 00:44:46 +0000 (0:00:00.131) 0:01:07.636 *****
2026-02-28 00:44:52.281478 | orchestrator | ok: [testbed-node-5] => {
2026-02-28 00:44:52.281488 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-28 00:44:52.281498 | orchestrator | }
2026-02-28 00:44:52.281508 | orchestrator |
2026-02-28 00:44:52.281518 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-28 00:44:52.281527 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.154) 0:01:07.790 *****
2026-02-28 00:44:52.281536 | orchestrator | ok: [testbed-node-5] => {
2026-02-28 00:44:52.281546 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-28 00:44:52.281558 | orchestrator | }
2026-02-28 00:44:52.281569 | orchestrator |
2026-02-28 00:44:52.281580 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-28 00:44:52.281591 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.158) 0:01:07.949 *****
2026-02-28 00:44:52.281601 | orchestrator | ok: [testbed-node-5] => {
2026-02-28 00:44:52.281612 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-28 00:44:52.281623 | orchestrator | }
2026-02-28 00:44:52.281634 | orchestrator |
2026-02-28 00:44:52.281645 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-28 00:44:52.281656 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.147) 0:01:08.096 *****
2026-02-28 00:44:52.281666 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:44:52.281676 | orchestrator |
2026-02-28 00:44:52.281685 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-28 00:44:52.281708 | orchestrator | Saturday 28 February 2026 00:44:47 +0000 (0:00:00.522) 0:01:08.619 *****
2026-02-28 00:44:52.281718 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:44:52.281737 | orchestrator |
2026-02-28 00:44:52.281747 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-28 00:44:52.281757 | orchestrator | Saturday 28 February 2026 00:44:48 +0000 (0:00:00.511) 0:01:09.131 *****
2026-02-28 00:44:52.281766 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:44:52.281784 | orchestrator |
2026-02-28 00:44:52.281793 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-28 00:44:52.281803 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.731) 0:01:09.863 *****
2026-02-28 00:44:52.281813 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:44:52.281822 | orchestrator |
2026-02-28 00:44:52.281832 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-28 00:44:52.281841 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.142) 0:01:10.006 *****
2026-02-28 00:44:52.281850 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.281860 | orchestrator |
2026-02-28 00:44:52.281869 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-28 00:44:52.281879 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.122) 0:01:10.128 *****
2026-02-28 00:44:52.281889 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.281898 | orchestrator |
2026-02-28 00:44:52.281908 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-28 00:44:52.281917 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.116) 0:01:10.245 *****
2026-02-28 00:44:52.281927 | orchestrator | ok: [testbed-node-5] => {
2026-02-28 00:44:52.281936 | orchestrator |     "vgs_report": {
2026-02-28 00:44:52.281947 | orchestrator |         "vg": []
2026-02-28 00:44:52.281973 | orchestrator |     }
2026-02-28 00:44:52.281983 | orchestrator | }
2026-02-28 00:44:52.281993 | orchestrator |
2026-02-28 00:44:52.282071 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-28 00:44:52.282082 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.134) 0:01:10.379 *****
2026-02-28 00:44:52.282092 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282101 | orchestrator |
2026-02-28 00:44:52.282111 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-28 00:44:52.282121 | orchestrator | Saturday 28 February 2026 00:44:49 +0000 (0:00:00.136) 0:01:10.521 *****
2026-02-28 00:44:52.282130 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282140 | orchestrator |
2026-02-28 00:44:52.282149 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-28 00:44:52.282159 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.154) 0:01:10.658 *****
2026-02-28 00:44:52.282169 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282178 | orchestrator |
2026-02-28 00:44:52.282188 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-28 00:44:52.282197 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.154) 0:01:10.812 *****
2026-02-28 00:44:52.282207 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282216 | orchestrator |
2026-02-28 00:44:52.282226 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-28 00:44:52.282235 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.127) 0:01:10.939 *****
2026-02-28 00:44:52.282245 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282254 | orchestrator |
2026-02-28 00:44:52.282264 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-28 00:44:52.282273 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.140) 0:01:11.080 *****
2026-02-28 00:44:52.282283 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282292 | orchestrator |
2026-02-28 00:44:52.282302 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-28 00:44:52.282318 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.152) 0:01:11.232 *****
2026-02-28 00:44:52.282328 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282337 | orchestrator |
2026-02-28 00:44:52.282347 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-28 00:44:52.282356 | orchestrator | Saturday 28 February 2026 00:44:50 +0000 (0:00:00.150) 0:01:11.383 *****
2026-02-28 00:44:52.282366 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282375 | orchestrator |
2026-02-28 00:44:52.282385 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-28 00:44:52.282401 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.351) 0:01:11.735 *****
2026-02-28 00:44:52.282411 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282420 | orchestrator |
2026-02-28 00:44:52.282430 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-28 00:44:52.282440 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.143) 0:01:11.878 *****
2026-02-28 00:44:52.282449 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282459 | orchestrator |
2026-02-28 00:44:52.282468 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-28 00:44:52.282478 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.125) 0:01:12.004 *****
2026-02-28 00:44:52.282487 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282497 | orchestrator |
2026-02-28 00:44:52.282506 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-28 00:44:52.282516 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.141) 0:01:12.146 *****
2026-02-28 00:44:52.282526 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282535 | orchestrator |
2026-02-28 00:44:52.282545 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-28 00:44:52.282554 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.146) 0:01:12.292 *****
2026-02-28 00:44:52.282563 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282573 | orchestrator |
2026-02-28 00:44:52.282582 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-28 00:44:52.282595 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.133) 0:01:12.425 *****
2026-02-28 00:44:52.282612 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282628 | orchestrator |
2026-02-28 00:44:52.282644 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-28 00:44:52.282660 | orchestrator | Saturday 28 February 2026 00:44:51 +0000 (0:00:00.127) 0:01:12.553 *****
2026-02-28 00:44:52.282676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:52.282693 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:52.282709 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282726 | orchestrator |
2026-02-28 00:44:52.282743 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-28 00:44:52.282759 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.154) 0:01:12.707 *****
2026-02-28 00:44:52.282774 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:52.282784 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:52.282794 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:52.282803 | orchestrator |
2026-02-28 00:44:52.282813 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-28 00:44:52.282822 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.169) 0:01:12.877 *****
2026-02-28 00:44:52.282842 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:55.406138 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:55.406245 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.406262 | orchestrator |
2026-02-28 00:44:55.406276 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-28 00:44:55.406289 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.138) 0:01:13.016 *****
2026-02-28 00:44:55.406325 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:55.406338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:55.406350 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.406361 | orchestrator |
2026-02-28 00:44:55.406373 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-28 00:44:55.406385 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.149) 0:01:13.165 *****
2026-02-28 00:44:55.406396 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:55.406434 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:55.406446 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.406458 | orchestrator |
2026-02-28 00:44:55.406469 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-28 00:44:55.406480 | orchestrator | Saturday 28 February 2026 00:44:52 +0000 (0:00:00.144) 0:01:13.309 *****
2026-02-28 00:44:55.406491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:55.406502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:55.406514 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.406525 | orchestrator |
2026-02-28 00:44:55.406536 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-28 00:44:55.406548 | orchestrator | Saturday 28 February 2026 00:44:53 +0000 (0:00:00.359) 0:01:13.669 *****
2026-02-28 00:44:55.406559 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:55.406571 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:55.406584 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.406597 | orchestrator |
2026-02-28 00:44:55.406609 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-28 00:44:55.406622 | orchestrator | Saturday 28 February 2026 00:44:53 +0000 (0:00:00.158) 0:01:13.827 *****
2026-02-28 00:44:55.406634 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:55.406647 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:55.406667 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:44:55.406687 | orchestrator |
2026-02-28 00:44:55.406705 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-28 00:44:55.406724 | orchestrator | Saturday 28 February 2026 00:44:53 +0000 (0:00:00.188) 0:01:14.016 *****
2026-02-28 00:44:55.406742 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:44:55.406762 | orchestrator |
2026-02-28 00:44:55.406780 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-28 00:44:55.406801 | orchestrator | Saturday 28 February 2026 00:44:53 +0000 (0:00:00.502) 0:01:14.519 *****
2026-02-28 00:44:55.406821 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:44:55.406841 | orchestrator |
2026-02-28 00:44:55.406862 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-28 00:44:55.406895 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.559) 0:01:15.078 *****
2026-02-28 00:44:55.406914 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:44:55.406934 | orchestrator |
2026-02-28 00:44:55.406953 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-28 00:44:55.406971 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.139) 0:01:15.218 *****
2026-02-28 00:44:55.406989 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'vg_name': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:44:55.407041 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'vg_name': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:55.407059 | orchestrator |
2026-02-28 00:44:55.407077 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-28 00:44:55.407096 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.177) 0:01:15.395 *****
2026-02-28 00:44:55.407138 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:44:55.407157 | orchestrator | skipping: [testbed-node-5] => (item={'data':
'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})  2026-02-28 00:44:55.407175 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:44:55.407193 | orchestrator | 2026-02-28 00:44:55.407211 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-28 00:44:55.407229 | orchestrator | Saturday 28 February 2026 00:44:54 +0000 (0:00:00.167) 0:01:15.563 ***** 2026-02-28 00:44:55.407247 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})  2026-02-28 00:44:55.407266 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})  2026-02-28 00:44:55.407284 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:44:55.407302 | orchestrator | 2026-02-28 00:44:55.407320 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-28 00:44:55.407338 | orchestrator | Saturday 28 February 2026 00:44:55 +0000 (0:00:00.152) 0:01:15.715 ***** 2026-02-28 00:44:55.407356 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})  2026-02-28 00:44:55.407375 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})  2026-02-28 00:44:55.407393 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:44:55.407411 | orchestrator | 2026-02-28 00:44:55.407429 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-28 00:44:55.407448 | orchestrator | Saturday 28 February 2026 00:44:55 +0000 (0:00:00.181) 0:01:15.897 ***** 2026-02-28 00:44:55.407466 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-28 00:44:55.407484 | orchestrator |  "lvm_report": { 2026-02-28 00:44:55.407503 | orchestrator |  "lv": [ 2026-02-28 00:44:55.407522 | orchestrator |  { 2026-02-28 00:44:55.407539 | orchestrator |  "lv_name": "osd-block-160cc444-1ede-5c9f-8076-16a146e97f10", 2026-02-28 00:44:55.407559 | orchestrator |  "vg_name": "ceph-160cc444-1ede-5c9f-8076-16a146e97f10" 2026-02-28 00:44:55.407578 | orchestrator |  }, 2026-02-28 00:44:55.407596 | orchestrator |  { 2026-02-28 00:44:55.407613 | orchestrator |  "lv_name": "osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8", 2026-02-28 00:44:55.407634 | orchestrator |  "vg_name": "ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8" 2026-02-28 00:44:55.407653 | orchestrator |  } 2026-02-28 00:44:55.407673 | orchestrator |  ], 2026-02-28 00:44:55.407692 | orchestrator |  "pv": [ 2026-02-28 00:44:55.407737 | orchestrator |  { 2026-02-28 00:44:55.407758 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-28 00:44:55.407776 | orchestrator |  "vg_name": "ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8" 2026-02-28 00:44:55.407795 | orchestrator |  }, 2026-02-28 00:44:55.407808 | orchestrator |  { 2026-02-28 00:44:55.407820 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-28 00:44:55.407831 | orchestrator |  "vg_name": "ceph-160cc444-1ede-5c9f-8076-16a146e97f10" 2026-02-28 00:44:55.407842 | orchestrator |  } 2026-02-28 00:44:55.407853 | orchestrator |  ] 2026-02-28 00:44:55.407863 | orchestrator |  } 2026-02-28 00:44:55.407875 | orchestrator | } 2026-02-28 00:44:55.407886 | orchestrator | 2026-02-28 00:44:55.407897 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:44:55.407908 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-28 00:44:55.407919 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-28 00:44:55.407930 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-28 00:44:55.407941 | orchestrator | 2026-02-28 00:44:55.407952 | orchestrator | 2026-02-28 00:44:55.407963 | orchestrator | 2026-02-28 00:44:55.407974 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:44:55.407985 | orchestrator | Saturday 28 February 2026 00:44:55 +0000 (0:00:00.148) 0:01:16.045 ***** 2026-02-28 00:44:55.408020 | orchestrator | =============================================================================== 2026-02-28 00:44:55.408031 | orchestrator | Create block VGs -------------------------------------------------------- 5.65s 2026-02-28 00:44:55.408042 | orchestrator | Create block LVs -------------------------------------------------------- 4.21s 2026-02-28 00:44:55.408053 | orchestrator | Add known partitions to the list of available block devices ------------- 1.80s 2026-02-28 00:44:55.408064 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.72s 2026-02-28 00:44:55.408086 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.71s 2026-02-28 00:44:55.408097 | orchestrator | Add known links to the list of available block devices ------------------ 1.65s 2026-02-28 00:44:55.408108 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s 2026-02-28 00:44:55.408119 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s 2026-02-28 00:44:55.408140 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.49s 2026-02-28 00:44:55.808535 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s 2026-02-28 00:44:55.808627 | orchestrator | Add known links to the list of available block devices ------------------ 1.07s 2026-02-28 00:44:55.808640 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.92s 2026-02-28 00:44:55.808650 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s 2026-02-28 00:44:55.808661 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s 2026-02-28 00:44:55.808671 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s 2026-02-28 00:44:55.808680 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.79s 2026-02-28 00:44:55.808690 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.73s 2026-02-28 00:44:55.808700 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-02-28 00:44:55.808710 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-02-28 00:44:55.808720 | orchestrator | Get initial list of available block devices ----------------------------- 0.73s 2026-02-28 00:45:08.225686 | orchestrator | 2026-02-28 00:45:08 | INFO  | Prepare task for execution of facts. 2026-02-28 00:45:08.305647 | orchestrator | 2026-02-28 00:45:08 | INFO  | Task 9d939c52-18b5-4fdf-a5ea-b4c2c55dcab3 (facts) was prepared for execution. 2026-02-28 00:45:08.305767 | orchestrator | 2026-02-28 00:45:08 | INFO  | It takes a moment until task 9d939c52-18b5-4fdf-a5ea-b4c2c55dcab3 (facts) has been started and output is visible here. 
2026-02-28 00:45:21.607723 | orchestrator | 2026-02-28 00:45:21.607837 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-28 00:45:21.607851 | orchestrator | 2026-02-28 00:45:21.607860 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-28 00:45:21.607869 | orchestrator | Saturday 28 February 2026 00:45:12 +0000 (0:00:00.271) 0:00:00.271 ***** 2026-02-28 00:45:21.607878 | orchestrator | ok: [testbed-manager] 2026-02-28 00:45:21.607888 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:45:21.607896 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:45:21.607905 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:45:21.607913 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:45:21.607922 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:21.607930 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:21.607938 | orchestrator | 2026-02-28 00:45:21.607947 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-28 00:45:21.607955 | orchestrator | Saturday 28 February 2026 00:45:13 +0000 (0:00:01.162) 0:00:01.433 ***** 2026-02-28 00:45:21.607964 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:45:21.607973 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:45:21.607981 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:45:21.608021 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:45:21.608032 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:45:21.608040 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:21.608048 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:21.608056 | orchestrator | 2026-02-28 00:45:21.608064 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-28 00:45:21.608072 | orchestrator | 2026-02-28 00:45:21.608080 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-28 00:45:21.608088 | orchestrator | Saturday 28 February 2026 00:45:15 +0000 (0:00:01.280) 0:00:02.714 ***** 2026-02-28 00:45:21.608096 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:45:21.608104 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:45:21.608112 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:45:21.608120 | orchestrator | ok: [testbed-manager] 2026-02-28 00:45:21.608128 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:45:21.608136 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:45:21.608144 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:45:21.608152 | orchestrator | 2026-02-28 00:45:21.608160 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-28 00:45:21.608168 | orchestrator | 2026-02-28 00:45:21.608176 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-28 00:45:21.608184 | orchestrator | Saturday 28 February 2026 00:45:20 +0000 (0:00:05.242) 0:00:07.958 ***** 2026-02-28 00:45:21.608192 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:45:21.608200 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:45:21.608208 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:45:21.608216 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:45:21.608224 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:45:21.608232 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:45:21.608240 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:45:21.608248 | orchestrator | 2026-02-28 00:45:21.608256 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:45:21.608265 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:21.608276 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-28 00:45:21.608309 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:21.608319 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:21.608328 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:21.608338 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:21.608347 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:45:21.608356 | orchestrator | 2026-02-28 00:45:21.608365 | orchestrator | 2026-02-28 00:45:21.608374 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:45:21.608383 | orchestrator | Saturday 28 February 2026 00:45:20 +0000 (0:00:00.632) 0:00:08.591 ***** 2026-02-28 00:45:21.608393 | orchestrator | =============================================================================== 2026-02-28 00:45:21.608402 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.24s 2026-02-28 00:45:21.608411 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-02-28 00:45:21.608420 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.16s 2026-02-28 00:45:21.608429 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-02-28 00:45:35.549660 | orchestrator | 2026-02-28 00:45:35 | INFO  | Prepare task for execution of frr. 2026-02-28 00:45:35.620357 | orchestrator | 2026-02-28 00:45:35 | INFO  | Task 048d8b91-8de4-4613-acb5-0668eda05365 (frr) was prepared for execution. 
2026-02-28 00:45:35.620458 | orchestrator | 2026-02-28 00:45:35 | INFO  | It takes a moment until task 048d8b91-8de4-4613-acb5-0668eda05365 (frr) has been started and output is visible here. 2026-02-28 00:46:05.341780 | orchestrator | 2026-02-28 00:46:05.341916 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-28 00:46:05.341942 | orchestrator | 2026-02-28 00:46:05.341970 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-28 00:46:05.342127 | orchestrator | Saturday 28 February 2026 00:45:40 +0000 (0:00:00.291) 0:00:00.291 ***** 2026-02-28 00:46:05.342147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:46:05.342168 | orchestrator | 2026-02-28 00:46:05.342188 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-28 00:46:05.342207 | orchestrator | Saturday 28 February 2026 00:45:41 +0000 (0:00:00.260) 0:00:00.551 ***** 2026-02-28 00:46:05.342226 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:05.342239 | orchestrator | 2026-02-28 00:46:05.342250 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-28 00:46:05.342262 | orchestrator | Saturday 28 February 2026 00:45:42 +0000 (0:00:01.383) 0:00:01.935 ***** 2026-02-28 00:46:05.342273 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:05.342284 | orchestrator | 2026-02-28 00:46:05.342295 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-28 00:46:05.342307 | orchestrator | Saturday 28 February 2026 00:45:53 +0000 (0:00:11.233) 0:00:13.169 ***** 2026-02-28 00:46:05.342318 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:05.342332 | orchestrator | 2026-02-28 00:46:05.342345 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-28 00:46:05.342358 | orchestrator | Saturday 28 February 2026 00:45:54 +0000 (0:00:01.079) 0:00:14.248 ***** 2026-02-28 00:46:05.342372 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:05.342407 | orchestrator | 2026-02-28 00:46:05.342420 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-28 00:46:05.342433 | orchestrator | Saturday 28 February 2026 00:45:55 +0000 (0:00:01.020) 0:00:15.269 ***** 2026-02-28 00:46:05.342447 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:05.342459 | orchestrator | 2026-02-28 00:46:05.342472 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-02-28 00:46:05.342485 | orchestrator | Saturday 28 February 2026 00:45:57 +0000 (0:00:01.286) 0:00:16.556 ***** 2026-02-28 00:46:05.342498 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:05.342511 | orchestrator | 2026-02-28 00:46:05.342524 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-02-28 00:46:05.342537 | orchestrator | Saturday 28 February 2026 00:45:57 +0000 (0:00:00.171) 0:00:16.728 ***** 2026-02-28 00:46:05.342550 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:05.342562 | orchestrator | 2026-02-28 00:46:05.342575 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-02-28 00:46:05.342588 | orchestrator | Saturday 28 February 2026 00:45:57 +0000 (0:00:00.174) 0:00:16.902 ***** 2026-02-28 00:46:05.342601 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:05.342612 | orchestrator | 2026-02-28 00:46:05.342623 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-28 00:46:05.342635 | orchestrator | Saturday 28 February 2026 00:45:57 +0000 (0:00:00.162) 0:00:17.064 ***** 2026-02-28 
00:46:05.342646 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:05.342657 | orchestrator | 2026-02-28 00:46:05.342668 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-28 00:46:05.342679 | orchestrator | Saturday 28 February 2026 00:45:57 +0000 (0:00:00.144) 0:00:17.209 ***** 2026-02-28 00:46:05.342690 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:46:05.342701 | orchestrator | 2026-02-28 00:46:05.342712 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-28 00:46:05.342723 | orchestrator | Saturday 28 February 2026 00:45:57 +0000 (0:00:00.175) 0:00:17.384 ***** 2026-02-28 00:46:05.342734 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:05.342745 | orchestrator | 2026-02-28 00:46:05.342756 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-28 00:46:05.342767 | orchestrator | Saturday 28 February 2026 00:45:59 +0000 (0:00:01.430) 0:00:18.814 ***** 2026-02-28 00:46:05.342777 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-28 00:46:05.342788 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-28 00:46:05.342801 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-28 00:46:05.342812 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-28 00:46:05.342823 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-28 00:46:05.342834 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-28 00:46:05.342845 | orchestrator | 2026-02-28 00:46:05.342855 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-02-28 00:46:05.342866 | orchestrator | Saturday 28 February 2026 00:46:01 +0000 (0:00:02.611) 0:00:21.426 ***** 2026-02-28 00:46:05.342877 | orchestrator | ok: [testbed-manager] 2026-02-28 00:46:05.342888 | orchestrator | 2026-02-28 00:46:05.342899 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-28 00:46:05.342910 | orchestrator | Saturday 28 February 2026 00:46:03 +0000 (0:00:01.391) 0:00:22.817 ***** 2026-02-28 00:46:05.342921 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:05.342932 | orchestrator | 2026-02-28 00:46:05.342942 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:46:05.342962 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 00:46:05.342974 | orchestrator | 2026-02-28 00:46:05.343023 | orchestrator | 2026-02-28 00:46:05.343063 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:46:05.343076 | orchestrator | Saturday 28 February 2026 00:46:04 +0000 (0:00:01.501) 0:00:24.319 ***** 2026-02-28 00:46:05.343087 | orchestrator | =============================================================================== 2026-02-28 00:46:05.343098 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.23s 2026-02-28 00:46:05.343109 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.61s 2026-02-28 00:46:05.343120 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.50s 2026-02-28 00:46:05.343131 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.43s 2026-02-28 00:46:05.343142 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.39s 
2026-02-28 00:46:05.343152 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.38s 2026-02-28 00:46:05.343163 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.29s 2026-02-28 00:46:05.343174 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.08s 2026-02-28 00:46:05.343185 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.02s 2026-02-28 00:46:05.343196 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.26s 2026-02-28 00:46:05.343207 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s 2026-02-28 00:46:05.343218 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.17s 2026-02-28 00:46:05.343229 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.17s 2026-02-28 00:46:05.343239 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s 2026-02-28 00:46:05.343250 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-02-28 00:46:05.871931 | orchestrator | 2026-02-28 00:46:05.874044 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Feb 28 00:46:05 UTC 2026 2026-02-28 00:46:05.874093 | orchestrator | 2026-02-28 00:46:08.036311 | orchestrator | 2026-02-28 00:46:08 | INFO  | Collection nutshell is prepared for execution 2026-02-28 00:46:08.036405 | orchestrator | 2026-02-28 00:46:08 | INFO  | A [0] - dotfiles 2026-02-28 00:46:18.126424 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [0] - homer 2026-02-28 00:46:18.126575 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [0] - netdata 2026-02-28 00:46:18.126591 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [0] - openstackclient 2026-02-28 00:46:18.126604 | orchestrator | 2026-02-28 
00:46:18 | INFO  | A [0] - phpmyadmin 2026-02-28 00:46:18.126616 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [0] - common 2026-02-28 00:46:18.130311 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [1] -- loadbalancer 2026-02-28 00:46:18.130578 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [2] --- opensearch 2026-02-28 00:46:18.130600 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [2] --- mariadb-ng 2026-02-28 00:46:18.130612 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [3] ---- horizon 2026-02-28 00:46:18.130624 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [3] ---- keystone 2026-02-28 00:46:18.130804 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [4] ----- neutron 2026-02-28 00:46:18.131058 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [5] ------ wait-for-nova 2026-02-28 00:46:18.131081 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [6] ------- octavia 2026-02-28 00:46:18.132798 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [4] ----- barbican 2026-02-28 00:46:18.133132 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [4] ----- designate 2026-02-28 00:46:18.133155 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [4] ----- ironic 2026-02-28 00:46:18.133167 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [4] ----- placement 2026-02-28 00:46:18.133178 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [4] ----- magnum 2026-02-28 00:46:18.133803 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [1] -- openvswitch 2026-02-28 00:46:18.133962 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [2] --- ovn 2026-02-28 00:46:18.134241 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [1] -- memcached 2026-02-28 00:46:18.134466 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [1] -- redis 2026-02-28 00:46:18.134488 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [1] -- rabbitmq-ng 2026-02-28 00:46:18.134862 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [0] - kubernetes 2026-02-28 00:46:18.137289 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [1] -- 
kubeconfig 2026-02-28 00:46:18.137318 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [1] -- copy-kubeconfig 2026-02-28 00:46:18.137424 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [0] - ceph 2026-02-28 00:46:18.139847 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [1] -- ceph-pools 2026-02-28 00:46:18.139945 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [2] --- copy-ceph-keys 2026-02-28 00:46:18.139967 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [3] ---- cephclient 2026-02-28 00:46:18.140025 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-02-28 00:46:18.140062 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [4] ----- wait-for-keystone 2026-02-28 00:46:18.140093 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [5] ------ kolla-ceph-rgw 2026-02-28 00:46:18.140218 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [5] ------ glance 2026-02-28 00:46:18.140233 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [5] ------ cinder 2026-02-28 00:46:18.140242 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [5] ------ nova 2026-02-28 00:46:18.140567 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [4] ----- prometheus 2026-02-28 00:46:18.140587 | orchestrator | 2026-02-28 00:46:18 | INFO  | A [5] ------ grafana 2026-02-28 00:46:18.358958 | orchestrator | 2026-02-28 00:46:18 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-02-28 00:46:18.359098 | orchestrator | 2026-02-28 00:46:18 | INFO  | Tasks are running in the background 2026-02-28 00:46:21.479148 | orchestrator | 2026-02-28 00:46:21 | INFO  | No task IDs specified, wait for all currently running tasks 2026-02-28 00:46:23.592777 | orchestrator | 2026-02-28 00:46:23 | INFO  | Task dee03a80-a500-4cfb-a98b-4e68ee1bc4cf is in state STARTED 2026-02-28 00:46:23.593303 | orchestrator | 2026-02-28 00:46:23 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:46:23.596088 | orchestrator | 2026-02-28 00:46:23 | INFO 
 | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:46:23.596602 | orchestrator | 2026-02-28 00:46:23 | INFO  | Task 4ec990e2-4d05-49d6-9647-723f3bf3b22e is in state STARTED 2026-02-28 00:46:23.597774 | orchestrator | 2026-02-28 00:46:23 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:46:23.598400 | orchestrator | 2026-02-28 00:46:23 | INFO  | Task 31ce8ded-8960-40dc-b7ba-f65dbda56632 is in state STARTED 2026-02-28 00:46:23.599053 | orchestrator | 2026-02-28 00:46:23 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED 2026-02-28 00:46:23.599103 | orchestrator | 2026-02-28 00:46:23 | INFO  | Wait 1 second(s) until the next check
00:46:39 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:46:42.228426 | orchestrator | 2026-02-28 00:46:42 | INFO  | Task dee03a80-a500-4cfb-a98b-4e68ee1bc4cf is in state STARTED 2026-02-28 00:46:42.228547 | orchestrator | 2026-02-28 00:46:42 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:46:42.233204 | orchestrator | 2026-02-28 00:46:42 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:46:42.241065 | orchestrator | 2026-02-28 00:46:42 | INFO  | Task 4ec990e2-4d05-49d6-9647-723f3bf3b22e is in state STARTED 2026-02-28 00:46:42.253659 | orchestrator | 2026-02-28 00:46:42 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:46:42.253875 | orchestrator | 2026-02-28 00:46:42 | INFO  | Task 31ce8ded-8960-40dc-b7ba-f65dbda56632 is in state STARTED 2026-02-28 00:46:42.254616 | orchestrator | 2026-02-28 00:46:42 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED 2026-02-28 00:46:42.255402 | orchestrator | 2026-02-28 00:46:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:46:45.352547 | orchestrator | 2026-02-28 00:46:45 | INFO  | Task dee03a80-a500-4cfb-a98b-4e68ee1bc4cf is in state STARTED 2026-02-28 00:46:45.352640 | orchestrator | 2026-02-28 00:46:45 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:46:45.383995 | orchestrator | 2026-02-28 00:46:45 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:46:45.384094 | orchestrator | 2026-02-28 00:46:45 | INFO  | Task 4ec990e2-4d05-49d6-9647-723f3bf3b22e is in state STARTED 2026-02-28 00:46:45.390102 | orchestrator | 2026-02-28 00:46:45 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:46:45.404003 | orchestrator | 2026-02-28 00:46:45 | INFO  | Task 31ce8ded-8960-40dc-b7ba-f65dbda56632 is in state STARTED 2026-02-28 00:46:45.420803 | orchestrator | 2026-02-28 
00:46:45 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED 2026-02-28 00:46:45.420861 | orchestrator | 2026-02-28 00:46:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:46:48.640947 | orchestrator | 2026-02-28 00:46:48 | INFO  | Task dee03a80-a500-4cfb-a98b-4e68ee1bc4cf is in state STARTED 2026-02-28 00:46:48.649869 | orchestrator | 2026-02-28 00:46:48 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:46:48.652278 | orchestrator | 2026-02-28 00:46:48 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:46:48.657129 | orchestrator | 2026-02-28 00:46:48 | INFO  | Task 4ec990e2-4d05-49d6-9647-723f3bf3b22e is in state SUCCESS 2026-02-28 00:46:48.657935 | orchestrator | 2026-02-28 00:46:48.658000 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-02-28 00:46:48.658059 | orchestrator | 2026-02-28 00:46:48.658073 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-02-28 00:46:48.658086 | orchestrator | Saturday 28 February 2026 00:46:31 +0000 (0:00:00.364) 0:00:00.364 ***** 2026-02-28 00:46:48.658098 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:46:48.658110 | orchestrator | changed: [testbed-manager] 2026-02-28 00:46:48.658121 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:46:48.658132 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:46:48.658143 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:46:48.658155 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:46:48.658166 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:46:48.658177 | orchestrator | 2026-02-28 00:46:48.658188 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
********
2026-02-28 00:46:48.658199 | orchestrator | Saturday 28 February 2026 00:46:35 +0000 (0:00:04.370) 0:00:04.734 *****
2026-02-28 00:46:48.658211 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-28 00:46:48.658223 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-28 00:46:48.658234 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-28 00:46:48.658245 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-28 00:46:48.658257 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-28 00:46:48.658268 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-28 00:46:48.658279 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-28 00:46:48.658290 | orchestrator |
2026-02-28 00:46:48.658301 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-02-28 00:46:48.658312 | orchestrator | Saturday 28 February 2026 00:46:37 +0000 (0:00:02.049) 0:00:06.784 *****
2026-02-28 00:46:48.658328 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:36.560911', 'end': '2026-02-28 00:46:36.568864', 'delta': '0:00:00.007953', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:46:48.658352 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:36.571309', 'end': '2026-02-28 00:46:36.577656', 'delta': '0:00:00.006347', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:46:48.658397 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:36.579190', 'end': '2026-02-28 00:46:36.587131', 'delta': '0:00:00.007941', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:46:48.658445 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:36.550122', 'end': '2026-02-28 00:46:36.558059', 'delta': '0:00:00.007937', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:46:48.658469 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:36.680810', 'end': '2026-02-28 00:46:36.685508', 'delta': '0:00:00.004698', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:46:48.658489 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:37.161096', 'end': '2026-02-28 00:46:37.169913', 'delta': '0:00:00.008817', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:46:48.658504 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-28 00:46:36.870379', 'end': '2026-02-28 00:46:36.878117', 'delta': '0:00:00.007738', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-28 00:46:48.658525 | orchestrator |
2026-02-28 00:46:48.658537 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-02-28 00:46:48.658548 | orchestrator | Saturday 28 February 2026 00:46:39 +0000 (0:00:01.871) 0:00:08.656 *****
2026-02-28 00:46:48.658559 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-28 00:46:48.658570 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-28 00:46:48.658581 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-28 00:46:48.658592 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-28 00:46:48.658603 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-28 00:46:48.658613 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-28 00:46:48.658624 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-28 00:46:48.658635 | orchestrator |
2026-02-28 00:46:48.658651 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-02-28 00:46:48.658663 | orchestrator | Saturday 28 February 2026 00:46:41 +0000 (0:00:02.325) 0:00:10.982 *****
2026-02-28 00:46:48.658674 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-02-28 00:46:48.658685 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-02-28 00:46:48.658695 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-02-28 00:46:48.658706 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-02-28 00:46:48.658717 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-02-28 00:46:48.658727 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-02-28 00:46:48.658738 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-02-28 00:46:48.658749 | orchestrator |
2026-02-28 00:46:48.658759 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:46:48.658777 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:46:48.658790 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:46:48.658801 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:46:48.658812 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:46:48.658823 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:46:48.658834 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:46:48.658845 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:46:48.658856 | orchestrator |
2026-02-28 00:46:48.658867 | orchestrator |
2026-02-28 00:46:48.658879 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:46:48.658890 | orchestrator | Saturday 28 February 2026 00:46:44 +0000 (0:00:03.331) 0:00:14.313 *****
2026-02-28 00:46:48.658901 | orchestrator | ===============================================================================
2026-02-28 00:46:48.658912 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.37s
2026-02-28 00:46:48.658923 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.33s
2026-02-28 00:46:48.658940 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.33s
2026-02-28 00:46:48.658951 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.05s
2026-02-28 00:46:48.659011 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.
--- 1.87s
2026-02-28 00:46:48.663695 | orchestrator | 2026-02-28 00:46:48 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED
2026-02-28 00:46:48.683933 | orchestrator | 2026-02-28 00:46:48 | INFO  | Task 3695deea-048b-4dce-b2b3-3538ba128c20 is in state STARTED
2026-02-28 00:46:48.690269 | orchestrator | 2026-02-28 00:46:48 | INFO  | Task 31ce8ded-8960-40dc-b7ba-f65dbda56632 is in state STARTED
2026-02-28 00:46:48.719580 | orchestrator | 2026-02-28 00:46:48 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED
2026-02-28 00:46:48.719724 | orchestrator | 2026-02-28 00:46:48 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:14.350631 | orchestrator | 2026-02-28 00:47:14 | INFO  | Task dee03a80-a500-4cfb-a98b-4e68ee1bc4cf is in state SUCCESS
2026-02-28 00:47:14.350775 | orchestrator | 2026-02-28 00:47:14 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED
2026-02-28 00:47:14.351503 | orchestrator | 2026-02-28 00:47:14 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED
2026-02-28 00:47:14.355320 | orchestrator | 2026-02-28 00:47:14 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED
2026-02-28 00:47:14.358575 | orchestrator | 2026-02-28 00:47:14 | INFO  | Task 3695deea-048b-4dce-b2b3-3538ba128c20 is in state STARTED
2026-02-28 00:47:14.360089 | orchestrator | 2026-02-28 00:47:14 | INFO  | Task 31ce8ded-8960-40dc-b7ba-f65dbda56632 is in state STARTED
2026-02-28 00:47:14.361346 | orchestrator | 2026-02-28 00:47:14 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED
2026-02-28 00:47:14.363011 | orchestrator | 2026-02-28 00:47:14 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:30.086369 | orchestrator | 2026-02-28 00:47:30 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED
2026-02-28 00:47:30.094350 | orchestrator | 2026-02-28 00:47:30 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED
2026-02-28 00:47:30.099374 | orchestrator | 2026-02-28 00:47:30 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED
2026-02-28 00:47:30.099448 | orchestrator | 2026-02-28 00:47:30 | INFO  | Task 3695deea-048b-4dce-b2b3-3538ba128c20 is in state STARTED
2026-02-28 00:47:30.100604 | orchestrator | 2026-02-28 00:47:30 | INFO  | Task 31ce8ded-8960-40dc-b7ba-f65dbda56632 is in state SUCCESS
2026-02-28 00:47:30.103341 | orchestrator | 2026-02-28 00:47:30 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED
2026-02-28 00:47:30.103390 | orchestrator | 2026-02-28 00:47:30 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:47:48.680514 | orchestrator | 2026-02-28 00:47:48 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED
2026-02-28 00:47:48.680554 | orchestrator
| 2026-02-28 00:47:48 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:47:48.682163 | orchestrator | 2026-02-28 00:47:48 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:47:48.684318 | orchestrator | 2026-02-28 00:47:48 | INFO  | Task 3695deea-048b-4dce-b2b3-3538ba128c20 is in state STARTED 2026-02-28 00:47:48.689496 | orchestrator | 2026-02-28 00:47:48 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED 2026-02-28 00:47:48.689570 | orchestrator | 2026-02-28 00:47:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:47:51.732309 | orchestrator | 2026-02-28 00:47:51 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:47:51.733564 | orchestrator | 2026-02-28 00:47:51 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:47:51.738069 | orchestrator | 2026-02-28 00:47:51 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:47:51.741716 | orchestrator | 2026-02-28 00:47:51 | INFO  | Task 3695deea-048b-4dce-b2b3-3538ba128c20 is in state STARTED 2026-02-28 00:47:51.744268 | orchestrator | 2026-02-28 00:47:51 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED 2026-02-28 00:47:51.744323 | orchestrator | 2026-02-28 00:47:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:47:54.848920 | orchestrator | 2026-02-28 00:47:54 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:47:54.852167 | orchestrator | 2026-02-28 00:47:54 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:47:54.852778 | orchestrator | 2026-02-28 00:47:54 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:47:54.853465 | orchestrator | 2026-02-28 00:47:54 | INFO  | Task 3695deea-048b-4dce-b2b3-3538ba128c20 is in state STARTED 2026-02-28 00:47:54.854564 | orchestrator | 
2026-02-28 00:47:54 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED 2026-02-28 00:47:54.854618 | orchestrator | 2026-02-28 00:47:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:47:57.922568 | orchestrator | 2026-02-28 00:47:57 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:47:57.924479 | orchestrator | 2026-02-28 00:47:57 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:47:57.926359 | orchestrator | 2026-02-28 00:47:57 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:47:57.928915 | orchestrator | 2026-02-28 00:47:57 | INFO  | Task 3695deea-048b-4dce-b2b3-3538ba128c20 is in state STARTED 2026-02-28 00:47:57.930164 | orchestrator | 2026-02-28 00:47:57 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED 2026-02-28 00:47:57.931766 | orchestrator | 2026-02-28 00:47:57 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:01.034361 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:01.035208 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:01.037307 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:01.041106 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task 3695deea-048b-4dce-b2b3-3538ba128c20 is in state STARTED 2026-02-28 00:48:01.041182 | orchestrator | 2026-02-28 00:48:01 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED 2026-02-28 00:48:01.041204 | orchestrator | 2026-02-28 00:48:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:04.130258 | orchestrator | 2026-02-28 00:48:04 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:04.133340 | orchestrator | 2026-02-28 00:48:04 | INFO  | 
Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:04.141094 | orchestrator | 2026-02-28 00:48:04 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:04.143497 | orchestrator | 2026-02-28 00:48:04 | INFO  | Task 3695deea-048b-4dce-b2b3-3538ba128c20 is in state STARTED 2026-02-28 00:48:04.144366 | orchestrator | 2026-02-28 00:48:04 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED 2026-02-28 00:48:04.145839 | orchestrator | 2026-02-28 00:48:04 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:07.220529 | orchestrator | 2026-02-28 00:48:07 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:07.225247 | orchestrator | 2026-02-28 00:48:07 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:07.225338 | orchestrator | 2026-02-28 00:48:07 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:07.225419 | orchestrator | 2026-02-28 00:48:07 | INFO  | Task 3695deea-048b-4dce-b2b3-3538ba128c20 is in state SUCCESS 2026-02-28 00:48:07.226765 | orchestrator | 2026-02-28 00:48:07.226823 | orchestrator | 2026-02-28 00:48:07.226842 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-02-28 00:48:07.226861 | orchestrator | 2026-02-28 00:48:07.226879 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-02-28 00:48:07.226896 | orchestrator | Saturday 28 February 2026 00:46:29 +0000 (0:00:00.559) 0:00:00.559 ***** 2026-02-28 00:48:07.226916 | orchestrator | ok: [testbed-manager] => { 2026-02-28 00:48:07.226965 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-02-28 00:48:07.226987 | orchestrator | }
2026-02-28 00:48:07.227005 | orchestrator |
2026-02-28 00:48:07.227024 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-02-28 00:48:07.227043 | orchestrator | Saturday 28 February 2026 00:46:30 +0000 (0:00:00.467) 0:00:01.027 *****
2026-02-28 00:48:07.227062 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:07.227081 | orchestrator |
2026-02-28 00:48:07.227102 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-02-28 00:48:07.227120 | orchestrator | Saturday 28 February 2026 00:46:31 +0000 (0:00:01.632) 0:00:02.659 *****
2026-02-28 00:48:07.227140 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-02-28 00:48:07.227160 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-02-28 00:48:07.227179 | orchestrator |
2026-02-28 00:48:07.227257 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-02-28 00:48:07.227268 | orchestrator | Saturday 28 February 2026 00:46:33 +0000 (0:00:01.515) 0:00:04.175 *****
2026-02-28 00:48:07.227279 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:07.227290 | orchestrator |
2026-02-28 00:48:07.227301 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-02-28 00:48:07.227312 | orchestrator | Saturday 28 February 2026 00:46:36 +0000 (0:00:03.232) 0:00:07.407 *****
2026-02-28 00:48:07.227323 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:07.227334 | orchestrator |
2026-02-28 00:48:07.227345 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-02-28 00:48:07.227356 | orchestrator | Saturday 28 February 2026 00:46:38 +0000 (0:00:02.527) 0:00:09.935 *****
2026-02-28 00:48:07.227367 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-02-28 00:48:07.227378 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:07.227389 | orchestrator |
2026-02-28 00:48:07.227400 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-02-28 00:48:07.227411 | orchestrator | Saturday 28 February 2026 00:47:07 +0000 (0:00:28.125) 0:00:38.061 *****
2026-02-28 00:48:07.227422 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:07.227433 | orchestrator |
2026-02-28 00:48:07.227444 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:48:07.227478 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:07.227490 | orchestrator |
2026-02-28 00:48:07.227501 | orchestrator |
2026-02-28 00:48:07.227521 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:48:07.227540 | orchestrator | Saturday 28 February 2026 00:47:12 +0000 (0:00:05.209) 0:00:43.270 *****
2026-02-28 00:48:07.227557 | orchestrator | ===============================================================================
2026-02-28 00:48:07.227568 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 28.13s
2026-02-28 00:48:07.227579 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.21s
2026-02-28 00:48:07.227590 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.23s
2026-02-28 00:48:07.227600 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.53s
2026-02-28 00:48:07.227611 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.63s
2026-02-28 00:48:07.227623 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.52s
2026-02-28 00:48:07.227634 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.47s
2026-02-28 00:48:07.227645 | orchestrator |
2026-02-28 00:48:07.227656 | orchestrator |
2026-02-28 00:48:07.227667 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-28 00:48:07.227678 | orchestrator |
2026-02-28 00:48:07.227689 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-28 00:48:07.227699 | orchestrator | Saturday 28 February 2026 00:46:32 +0000 (0:00:00.781) 0:00:00.781 *****
2026-02-28 00:48:07.227710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-28 00:48:07.227722 | orchestrator |
2026-02-28 00:48:07.227733 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-28 00:48:07.227753 | orchestrator | Saturday 28 February 2026 00:46:32 +0000 (0:00:00.323) 0:00:01.105 *****
2026-02-28 00:48:07.227764 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-28 00:48:07.227775 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-28 00:48:07.227786 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-28 00:48:07.227797 | orchestrator |
2026-02-28 00:48:07.227808 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-28 00:48:07.227819 | orchestrator | Saturday 28 February 2026 00:46:34 +0000 (0:00:01.524) 0:00:02.629 *****
2026-02-28 00:48:07.227830 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:07.227840 | orchestrator |
2026-02-28 00:48:07.227852 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-28 00:48:07.227863 | orchestrator | Saturday 28 February 2026 00:46:36 +0000 (0:00:02.272) 0:00:04.901 *****
2026-02-28 00:48:07.227891 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-28 00:48:07.227903 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:07.227915 | orchestrator |
2026-02-28 00:48:07.227954 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-28 00:48:07.227975 | orchestrator | Saturday 28 February 2026 00:47:16 +0000 (0:00:39.562) 0:00:44.464 *****
2026-02-28 00:48:07.227995 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:07.228013 | orchestrator |
2026-02-28 00:48:07.228029 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-28 00:48:07.228039 | orchestrator | Saturday 28 February 2026 00:47:17 +0000 (0:00:01.735) 0:00:46.199 *****
2026-02-28 00:48:07.228050 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:07.228062 | orchestrator |
2026-02-28 00:48:07.228073 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-28 00:48:07.228084 | orchestrator | Saturday 28 February 2026 00:47:18 +0000 (0:00:01.125) 0:00:47.325 *****
2026-02-28 00:48:07.228104 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:07.228115 | orchestrator |
2026-02-28 00:48:07.228126 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-28 00:48:07.228137 | orchestrator | Saturday 28 February 2026 00:47:23 +0000 (0:00:04.473) 0:00:51.798 *****
2026-02-28 00:48:07.228148 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:07.228159 | orchestrator |
2026-02-28 00:48:07.228170 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-28 00:48:07.228181 | orchestrator | Saturday 28 February 2026 00:47:26 +0000 (0:00:02.950) 0:00:54.749 *****
2026-02-28 00:48:07.228192 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:07.228203 | orchestrator |
2026-02-28 00:48:07.228214 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-28 00:48:07.228225 | orchestrator | Saturday 28 February 2026 00:47:27 +0000 (0:00:01.521) 0:00:56.271 *****
2026-02-28 00:48:07.228236 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:07.228247 | orchestrator |
2026-02-28 00:48:07.228258 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:48:07.228269 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:07.228280 | orchestrator |
2026-02-28 00:48:07.228292 | orchestrator |
2026-02-28 00:48:07.228302 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:48:07.228313 | orchestrator | Saturday 28 February 2026 00:47:28 +0000 (0:00:00.549) 0:00:56.821 *****
2026-02-28 00:48:07.228324 | orchestrator | ===============================================================================
2026-02-28 00:48:07.228335 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 39.56s
2026-02-28 00:48:07.228346 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.47s
2026-02-28 00:48:07.228357 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.95s
2026-02-28 00:48:07.228368 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.27s
2026-02-28 00:48:07.228379 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.74s
2026-02-28 00:48:07.228390 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.52s
2026-02-28 00:48:07.228401 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.52s
2026-02-28 00:48:07.228412 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.13s
2026-02-28 00:48:07.228423 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.55s
2026-02-28 00:48:07.228434 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.32s
2026-02-28 00:48:07.228444 | orchestrator |
2026-02-28 00:48:07.228456 | orchestrator |
2026-02-28 00:48:07.228466 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-02-28 00:48:07.228477 | orchestrator |
2026-02-28 00:48:07.228488 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-02-28 00:48:07.228499 | orchestrator | Saturday 28 February 2026 00:46:54 +0000 (0:00:00.460) 0:00:00.460 *****
2026-02-28 00:48:07.228510 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:07.228521 | orchestrator |
2026-02-28 00:48:07.228532 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-02-28 00:48:07.228543 | orchestrator | Saturday 28 February 2026 00:46:55 +0000 (0:00:01.263) 0:00:01.723 *****
2026-02-28 00:48:07.228554 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-02-28 00:48:07.228565 | orchestrator |
2026-02-28 00:48:07.228576 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-02-28 00:48:07.228587 | orchestrator | Saturday 28 February 2026 00:46:56 +0000 (0:00:00.894) 0:00:02.618 *****
2026-02-28 00:48:07.228597 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:07.228608 | orchestrator |
2026-02-28 00:48:07.228619 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-02-28 00:48:07.228642 | orchestrator | Saturday 28 February 2026 00:46:58 +0000 (0:00:01.616) 0:00:04.234 *****
2026-02-28 00:48:07.228654 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-02-28 00:48:07.228665 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:07.228676 | orchestrator |
2026-02-28 00:48:07.228687 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-02-28 00:48:07.228698 | orchestrator | Saturday 28 February 2026 00:47:58 +0000 (0:01:00.445) 0:01:04.680 *****
2026-02-28 00:48:07.228709 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:07.228720 | orchestrator |
2026-02-28 00:48:07.228731 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 00:48:07.228742 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 00:48:07.228753 | orchestrator |
2026-02-28 00:48:07.228764 | orchestrator |
2026-02-28 00:48:07.228775 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 00:48:07.228794 | orchestrator | Saturday 28 February 2026 00:48:04 +0000 (0:00:05.178) 0:01:09.858 *****
2026-02-28 00:48:07.228806 | orchestrator | ===============================================================================
2026-02-28 00:48:07.228817 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 60.45s
2026-02-28 00:48:07.228827 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.18s
2026-02-28 00:48:07.228838 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.62s
2026-02-28 00:48:07.228849 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.26s
2026-02-28 00:48:07.228860 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.89s
2026-02-28 00:48:07.229058 | orchestrator | 2026-02-28 00:48:07 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED
2026-02-28 00:48:07.229085 | orchestrator | 2026-02-28 00:48:07 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:10.285591 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED
2026-02-28 00:48:10.289278 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED
2026-02-28 00:48:10.290339 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED
2026-02-28 00:48:10.290363 | orchestrator | 2026-02-28 00:48:10 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED
2026-02-28 00:48:10.290372 | orchestrator | 2026-02-28 00:48:10 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:13.340856 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED
2026-02-28 00:48:13.341138 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED
2026-02-28 00:48:13.341775 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED
2026-02-28 00:48:13.342354 | orchestrator | 2026-02-28 00:48:13 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED
2026-02-28 00:48:13.342407 | orchestrator | 2026-02-28 00:48:13 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:16.408012 | orchestrator | 2026-02-28 00:48:16 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED
2026-02-28 00:48:16.411411 | orchestrator | 2026-02-28 00:48:16 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED
2026-02-28 00:48:16.412389 | orchestrator | 2026-02-28 00:48:16 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED
2026-02-28 00:48:16.414759 | orchestrator | 2026-02-28 00:48:16 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED
2026-02-28 00:48:16.414816 | orchestrator | 2026-02-28 00:48:16 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:19.482292 | orchestrator | 2026-02-28 00:48:19 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED
2026-02-28 00:48:19.492830 | orchestrator | 2026-02-28 00:48:19 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED
2026-02-28 00:48:19.500767 | orchestrator | 2026-02-28 00:48:19 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED
2026-02-28 00:48:19.505139 | orchestrator | 2026-02-28 00:48:19 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED
2026-02-28 00:48:19.505180 | orchestrator | 2026-02-28 00:48:19 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:22.542295 | orchestrator | 2026-02-28 00:48:22 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED
2026-02-28 00:48:22.543078 | orchestrator | 2026-02-28 00:48:22 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED
2026-02-28 00:48:22.544781 | orchestrator | 2026-02-28 00:48:22 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED
2026-02-28 00:48:22.546592 | orchestrator | 2026-02-28 00:48:22 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state STARTED
2026-02-28 00:48:22.546629 | orchestrator | 2026-02-28 00:48:22 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:25.590821 | orchestrator | 2026-02-28 00:48:25 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED
2026-02-28 00:48:25.592562 | orchestrator | 2026-02-28 00:48:25 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED
2026-02-28 00:48:25.593315 | orchestrator | 2026-02-28 00:48:25 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED
2026-02-28 00:48:25.594457 | orchestrator |
2026-02-28 00:48:25.594485 | orchestrator | 2026-02-28 00:48:25 | INFO  | Task 25cbf4ca-0296-463d-9a83-9922a531f1be is in state SUCCESS
2026-02-28 00:48:25.595393 | orchestrator | 2026-02-28 00:48:25 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:48:25.595741 | orchestrator |
2026-02-28 00:48:25.595771 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 00:48:25.595783 | orchestrator |
2026-02-28 00:48:25.595795 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 00:48:25.595807 | orchestrator | Saturday 28 February 2026 00:46:32 +0000 (0:00:00.402) 0:00:00.402 *****
2026-02-28 00:48:25.595819 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-02-28 00:48:25.595831 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-02-28 00:48:25.595842 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-02-28 00:48:25.595852 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-02-28 00:48:25.595863 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-02-28 00:48:25.595874 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-02-28 00:48:25.595957 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-02-28 00:48:25.595971 | orchestrator |
2026-02-28 00:48:25.595981 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-02-28 00:48:25.595991 | orchestrator |
2026-02-28 00:48:25.596001 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-02-28 00:48:25.596011 | orchestrator | Saturday 28 February 2026 00:46:34 +0000 (0:00:01.698) 0:00:02.100 *****
2026-02-28 00:48:25.596035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:48:25.596073 | orchestrator |
2026-02-28 00:48:25.596086 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-02-28 00:48:25.596098 | orchestrator | Saturday 28 February 2026 00:46:37 +0000 (0:00:02.507) 0:00:04.608 *****
2026-02-28 00:48:25.596109 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:25.596121 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:48:25.596131 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:48:25.596141 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:48:25.596152 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:48:25.596162 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:25.596171 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:48:25.596181 | orchestrator |
2026-02-28 00:48:25.596192 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-02-28 00:48:25.596202 | orchestrator | Saturday 28 February 2026 00:46:39 +0000 (0:00:02.309) 0:00:06.917 *****
2026-02-28 00:48:25.596213 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:25.596226 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:48:25.596237 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:48:25.596248 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:48:25.596259 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:48:25.596270 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:48:25.596282 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:25.596293 | orchestrator |
2026-02-28 00:48:25.596305 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-02-28 00:48:25.596315 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:03.603) 0:00:10.520 *****
2026-02-28 00:48:25.596325 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:25.596336 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:25.596346 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:25.596357 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:25.596369 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:25.596380 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:25.596392 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:25.596403 | orchestrator |
2026-02-28 00:48:25.596414 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-28 00:48:25.596425 | orchestrator | Saturday 28 February 2026 00:46:45 +0000 (0:00:02.243) 0:00:12.763 *****
2026-02-28 00:48:25.596437 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:25.596448 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:25.596460 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:25.596471 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:25.596483 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:25.596494 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:25.596503 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:25.596513 | orchestrator |
2026-02-28 00:48:25.596524 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-28 00:48:25.596534 | orchestrator | Saturday 28 February 2026 00:46:59 +0000 (0:00:14.502) 0:00:27.266 *****
2026-02-28 00:48:25.596544 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:25.596559 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:25.596570 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:25.596582 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:25.596595 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:25.596609 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:25.596622 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:25.596635 | orchestrator |
2026-02-28 00:48:25.596656 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-28 00:48:25.596670 | orchestrator | Saturday 28 February 2026 00:47:49 +0000 (0:00:50.302) 0:01:17.568 *****
2026-02-28 00:48:25.596681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:48:25.596704 | orchestrator |
2026-02-28 00:48:25.596714 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-28 00:48:25.596724 | orchestrator | Saturday 28 February 2026 00:47:51 +0000 (0:00:01.910) 0:01:19.479 *****
2026-02-28 00:48:25.596736 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-28 00:48:25.596747 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-28 00:48:25.596757 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-28 00:48:25.596768 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-28 00:48:25.596795 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-28 00:48:25.596807 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-28 00:48:25.596817 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-28 00:48:25.596828 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-28 00:48:25.596837 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-28 00:48:25.596848 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-28 00:48:25.596858 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-28 00:48:25.596869 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-28 00:48:25.596879 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-28 00:48:25.596890 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-28 00:48:25.596901 | orchestrator |
2026-02-28 00:48:25.596936 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-28 00:48:25.596948 | orchestrator | Saturday 28 February 2026 00:47:57 +0000 (0:00:05.728) 0:01:25.207 *****
2026-02-28 00:48:25.596958 | orchestrator | ok: [testbed-manager]
2026-02-28 00:48:25.596969 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:25.596979 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:48:25.596990 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:48:25.597001 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:48:25.597012 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:48:25.597023 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:48:25.597035 | orchestrator |
2026-02-28 00:48:25.597045 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-28 00:48:25.597057 | orchestrator | Saturday 28 February 2026 00:47:59 +0000 (0:00:01.669) 0:01:26.877 *****
2026-02-28 00:48:25.597069 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:25.597080 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:25.597091 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:25.597102 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:25.597113 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:25.597129 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:25.597140 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:25.597150 | orchestrator |
2026-02-28 00:48:25.597162 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-28 00:48:25.597174 | orchestrator | Saturday 28 February 2026 00:48:01 +0000 (0:00:01.773) 0:01:28.651 *****
2026-02-28 00:48:25.597185 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:48:25.597197 | orchestrator | ok:
[testbed-node-1] 2026-02-28 00:48:25.597209 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:48:25.597220 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:25.597232 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:48:25.597243 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:48:25.597252 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:48:25.597262 | orchestrator | 2026-02-28 00:48:25.597273 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-28 00:48:25.597283 | orchestrator | Saturday 28 February 2026 00:48:02 +0000 (0:00:01.433) 0:01:30.084 ***** 2026-02-28 00:48:25.597293 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:48:25.597303 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:48:25.597312 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:48:25.597322 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:48:25.597345 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:48:25.597355 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:48:25.597365 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:25.597375 | orchestrator | 2026-02-28 00:48:25.597385 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-28 00:48:25.597395 | orchestrator | Saturday 28 February 2026 00:48:05 +0000 (0:00:03.201) 0:01:33.285 ***** 2026-02-28 00:48:25.597405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-28 00:48:25.597419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:48:25.597430 | orchestrator | 2026-02-28 00:48:25.597440 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-28 00:48:25.597450 | 
orchestrator | Saturday 28 February 2026 00:48:07 +0000 (0:00:01.938) 0:01:35.225 ***** 2026-02-28 00:48:25.597460 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:25.597470 | orchestrator | 2026-02-28 00:48:25.597480 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-28 00:48:25.597490 | orchestrator | Saturday 28 February 2026 00:48:10 +0000 (0:00:02.658) 0:01:37.883 ***** 2026-02-28 00:48:25.597499 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:48:25.597509 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:25.597518 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:48:25.597528 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:48:25.597538 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:48:25.597547 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:48:25.597565 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:25.597575 | orchestrator | 2026-02-28 00:48:25.597585 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:48:25.597595 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:48:25.597606 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:48:25.597616 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:48:25.597627 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:48:25.597648 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:48:25.597658 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:48:25.597668 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-02-28 00:48:25.597679 | orchestrator | 2026-02-28 00:48:25.597688 | orchestrator | 2026-02-28 00:48:25.597698 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:48:25.597708 | orchestrator | Saturday 28 February 2026 00:48:21 +0000 (0:00:11.448) 0:01:49.331 ***** 2026-02-28 00:48:25.597719 | orchestrator | =============================================================================== 2026-02-28 00:48:25.597730 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 50.30s 2026-02-28 00:48:25.597741 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.50s 2026-02-28 00:48:25.597751 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.45s 2026-02-28 00:48:25.597761 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.73s 2026-02-28 00:48:25.597780 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.60s 2026-02-28 00:48:25.597790 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.20s 2026-02-28 00:48:25.597799 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.66s 2026-02-28 00:48:25.597809 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.51s 2026-02-28 00:48:25.597818 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.31s 2026-02-28 00:48:25.597828 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.24s 2026-02-28 00:48:25.597839 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.94s 2026-02-28 00:48:25.597849 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.91s 2026-02-28 00:48:25.597858 
| orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.77s 2026-02-28 00:48:25.597867 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.70s 2026-02-28 00:48:25.597875 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.67s 2026-02-28 00:48:25.597884 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.43s 2026-02-28 00:48:28.633729 | orchestrator | 2026-02-28 00:48:28 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:28.636230 | orchestrator | 2026-02-28 00:48:28 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:28.638856 | orchestrator | 2026-02-28 00:48:28 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:28.638922 | orchestrator | 2026-02-28 00:48:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:31.684658 | orchestrator | 2026-02-28 00:48:31 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:31.685457 | orchestrator | 2026-02-28 00:48:31 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:31.687413 | orchestrator | 2026-02-28 00:48:31 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:31.687454 | orchestrator | 2026-02-28 00:48:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:34.733354 | orchestrator | 2026-02-28 00:48:34 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:34.734063 | orchestrator | 2026-02-28 00:48:34 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:34.734588 | orchestrator | 2026-02-28 00:48:34 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:34.734619 | orchestrator | 2026-02-28 00:48:34 
| INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:37.788111 | orchestrator | 2026-02-28 00:48:37 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:37.788339 | orchestrator | 2026-02-28 00:48:37 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:37.790415 | orchestrator | 2026-02-28 00:48:37 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:37.790481 | orchestrator | 2026-02-28 00:48:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:40.833742 | orchestrator | 2026-02-28 00:48:40 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:40.833843 | orchestrator | 2026-02-28 00:48:40 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:40.834745 | orchestrator | 2026-02-28 00:48:40 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:40.834835 | orchestrator | 2026-02-28 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:43.860807 | orchestrator | 2026-02-28 00:48:43 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:43.861370 | orchestrator | 2026-02-28 00:48:43 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:43.861694 | orchestrator | 2026-02-28 00:48:43 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:43.861708 | orchestrator | 2026-02-28 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:46.911715 | orchestrator | 2026-02-28 00:48:46 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:46.911795 | orchestrator | 2026-02-28 00:48:46 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:46.912949 | orchestrator | 2026-02-28 00:48:46 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is 
in state STARTED 2026-02-28 00:48:46.912985 | orchestrator | 2026-02-28 00:48:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:49.961111 | orchestrator | 2026-02-28 00:48:49 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:49.962615 | orchestrator | 2026-02-28 00:48:49 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:49.963731 | orchestrator | 2026-02-28 00:48:49 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:49.963765 | orchestrator | 2026-02-28 00:48:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:52.989224 | orchestrator | 2026-02-28 00:48:52 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:52.990484 | orchestrator | 2026-02-28 00:48:52 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:52.992104 | orchestrator | 2026-02-28 00:48:52 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:52.992294 | orchestrator | 2026-02-28 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:56.022464 | orchestrator | 2026-02-28 00:48:56 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:56.022588 | orchestrator | 2026-02-28 00:48:56 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:56.023481 | orchestrator | 2026-02-28 00:48:56 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state STARTED 2026-02-28 00:48:56.023543 | orchestrator | 2026-02-28 00:48:56 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:48:59.054588 | orchestrator | 2026-02-28 00:48:59 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:48:59.054683 | orchestrator | 2026-02-28 00:48:59 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:48:59.055301 | 
orchestrator | 2026-02-28 00:48:59 | INFO  | Task 8b534aaa-affc-4654-becf-7cf24aa6873b is in state STARTED 2026-02-28 00:48:59.056062 | orchestrator | 2026-02-28 00:48:59 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:48:59.056960 | orchestrator | 2026-02-28 00:48:59 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:48:59.060847 | orchestrator | 2026-02-28 00:48:59 | INFO  | Task 46b800f5-e80c-4d94-b227-f65a9e561ba2 is in state SUCCESS 2026-02-28 00:48:59.062335 | orchestrator | 2026-02-28 00:48:59.062427 | orchestrator | 2026-02-28 00:48:59.062446 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-28 00:48:59.062492 | orchestrator | 2026-02-28 00:48:59.062501 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-28 00:48:59.062522 | orchestrator | Saturday 28 February 2026 00:46:23 +0000 (0:00:00.212) 0:00:00.212 ***** 2026-02-28 00:48:59.062528 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:48:59.062534 | orchestrator | 2026-02-28 00:48:59.062538 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-28 00:48:59.062542 | orchestrator | Saturday 28 February 2026 00:46:24 +0000 (0:00:01.106) 0:00:01.319 ***** 2026-02-28 00:48:59.062546 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:48:59.062550 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:48:59.062554 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:48:59.062558 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 
00:48:59.062562 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:48:59.062566 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:48:59.062588 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:48:59.062593 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-28 00:48:59.062597 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:48:59.062600 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:48:59.062604 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:48:59.062608 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:48:59.062612 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:48:59.062616 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:48:59.062620 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:48:59.062624 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:48:59.062628 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:48:59.062631 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-28 00:48:59.062635 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:48:59.062639 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 
00:48:59.062643 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-28 00:48:59.062647 | orchestrator | 2026-02-28 00:48:59.062650 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-28 00:48:59.062654 | orchestrator | Saturday 28 February 2026 00:46:28 +0000 (0:00:03.975) 0:00:05.294 ***** 2026-02-28 00:48:59.062658 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:48:59.062664 | orchestrator | 2026-02-28 00:48:59.062668 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-28 00:48:59.062672 | orchestrator | Saturday 28 February 2026 00:46:29 +0000 (0:00:01.384) 0:00:06.678 ***** 2026-02-28 00:48:59.062679 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.062689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-02-28 00:48:59.062727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.062733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.062737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062746 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.062757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.062805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062812 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.062841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062877 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.062884 | orchestrator | 2026-02-28 00:48:59.062891 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-28 00:48:59.062896 | orchestrator | Saturday 28 February 2026 00:46:35 +0000 (0:00:06.095) 0:00:12.773 ***** 2026-02-28 00:48:59.062901 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.062906 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.062914 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.062919 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:48:59.062923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.062979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.062986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.062991 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:48:59.062996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063014 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 00:48:59.063018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063032 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:48:59.063042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063055 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:48:59.063060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063077 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:48:59.063081 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063100 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:48:59.063105 | orchestrator | 2026-02-28 00:48:59.063111 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-28 00:48:59.063120 | orchestrator | Saturday 28 February 2026 00:46:37 +0000 (0:00:01.721) 0:00:14.495 ***** 2026-02-28 00:48:59.063128 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063136 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063147 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063154 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:48:59.063160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063180 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:48:59.063198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063213 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063245 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:48:59.063252 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:48:59.063261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063877 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:48:59.063881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063885 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:48:59.063889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-28 00:48:59.063893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.063901 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:48:59.063904 | orchestrator | 2026-02-28 00:48:59.063908 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-28 00:48:59.063913 | orchestrator | Saturday 28 February 2026 00:46:40 +0000 (0:00:03.314) 0:00:17.810 ***** 2026-02-28 00:48:59.063916 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:48:59.063920 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:48:59.063924 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:48:59.063928 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 00:48:59.063931 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:48:59.063958 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:48:59.063962 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:48:59.063966 | orchestrator | 2026-02-28 00:48:59.063970 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-28 00:48:59.063974 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:01.648) 0:00:19.459 ***** 2026-02-28 00:48:59.063980 | orchestrator | skipping: [testbed-manager] 2026-02-28 00:48:59.063984 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:48:59.063987 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:48:59.063991 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:48:59.063997 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:48:59.064001 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:48:59.064005 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:48:59.064009 | orchestrator | 2026-02-28 00:48:59.064013 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-28 00:48:59.064016 | orchestrator | Saturday 28 February 2026 00:46:44 +0000 (0:00:01.771) 0:00:21.230 ***** 2026-02-28 00:48:59.064021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064029 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064049 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064056 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064060 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064094 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-28 00:48:59.064098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})
2026-02-28 00:48:59.064114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:48:59.064118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:48:59.064122 | orchestrator |
2026-02-28 00:48:59.064126 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-28 00:48:59.064130 | orchestrator | Saturday 28 February 2026 00:46:55 +0000 (0:00:11.381) 0:00:32.611 *****
2026-02-28 00:48:59.064134 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a directory
2026-02-28 00:48:59.064158 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:48:59.064162 | orchestrator |
2026-02-28 00:48:59.064166 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-28 00:48:59.064170 | orchestrator | Saturday 28 February 2026 00:46:58 +0000 (0:00:02.677) 0:00:35.289 *****
2026-02-28 00:48:59.064173 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a directory
2026-02-28 00:48:59.064195 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:48:59.064199 | orchestrator |
2026-02-28 00:48:59.064204 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-28 00:48:59.064208 | orchestrator | Saturday 28 February 2026 00:46:59 +0000 (0:00:01.162) 0:00:36.452 *****
2026-02-28 00:48:59.064212 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a directory
2026-02-28 00:48:59.064231 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:48:59.064235 | orchestrator |
2026-02-28 00:48:59.064239 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-28 00:48:59.064243 | orchestrator | Saturday 28 February 2026 00:47:00 +0000 (0:00:01.464) 0:00:37.916 *****
2026-02-28 00:48:59.064246 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a directory
2026-02-28 00:48:59.064265 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-28 00:48:59.064269 | orchestrator |
2026-02-28 00:48:59.064273 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-28 00:48:59.064277 | orchestrator | Saturday 28 February 2026 00:47:01 +0000 (0:00:00.883) 0:00:38.800 *****
2026-02-28 00:48:59.064280 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:48:59.064284 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:48:59.064288 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:48:59.064292 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:48:59.064296 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:48:59.064299 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:48:59.064303 | orchestrator | changed: [testbed-manager]
2026-02-28 00:48:59.064307 | orchestrator |
2026-02-28 00:48:59.064311 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-28 00:48:59.064314 | orchestrator | Saturday 28 February 2026 00:47:10 +0000 (0:00:08.702) 0:00:47.502 *****
2026-02-28 00:48:59.064318 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:48:59.064322 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:48:59.064326 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:48:59.064330 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:48:59.064337 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-28 00:48:59.064341 | orchestrator | changed: [testbed-node-4] =>
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-28 00:48:59.064344 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-28 00:48:59.064348 | orchestrator | 2026-02-28 00:48:59.064352 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-28 00:48:59.064356 | orchestrator | Saturday 28 February 2026 00:47:18 +0000 (0:00:07.638) 0:00:55.141 ***** 2026-02-28 00:48:59.064359 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:59.064363 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:48:59.064367 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:48:59.064371 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:48:59.064375 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:48:59.064378 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:48:59.064382 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:59.064386 | orchestrator | 2026-02-28 00:48:59.064390 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-28 00:48:59.064393 | orchestrator | Saturday 28 February 2026 00:47:21 +0000 (0:00:03.476) 0:00:58.617 ***** 2026-02-28 00:48:59.064397 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.064410 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.064418 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.064428 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064433 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.064445 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064449 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.064454 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064461 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064466 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.064475 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:48:59.064493 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064498 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064503 | 
orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:48:59.064511 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:48:59.064516 | orchestrator |
2026-02-28 00:48:59.064520 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-28 00:48:59.064524 | orchestrator | Saturday 28 February 2026 00:47:24 +0000 (0:00:02.550) 0:01:01.167 *****
2026-02-28 00:48:59.064529 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:48:59.064533 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:48:59.064538 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:48:59.064542 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:48:59.064545 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:48:59.064549 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:48:59.064553 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-28 00:48:59.064557 | orchestrator |
2026-02-28 00:48:59.064560 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-28 00:48:59.064564 | orchestrator | Saturday 28 February 2026 00:47:27 +0000 (0:00:03.773) 0:01:04.940 *****
2026-02-28 00:48:59.064568 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:48:59.064572 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:48:59.064575 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:48:59.064579 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:48:59.064583 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:48:59.064587 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:48:59.064590 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-28 00:48:59.064594 | orchestrator |
2026-02-28 00:48:59.064598 | orchestrator | TASK [common : Check common containers] ****************************************
2026-02-28 00:48:59.064602 | orchestrator | Saturday 28 February 2026 00:47:31 +0000 (0:00:03.353) 0:01:08.294 *****
2026-02-28 00:48:59.064606 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064629 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064661 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064665 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-28 00:48:59.064681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064708 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:48:59.064712 | orchestrator | 2026-02-28 00:48:59.064716 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-02-28 00:48:59.064720 | orchestrator | Saturday 28 February 2026 00:47:35 +0000 (0:00:04.422) 0:01:12.717 ***** 2026-02-28 00:48:59.064724 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:59.064727 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:59.064731 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:48:59.064735 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:48:59.064739 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:48:59.064742 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:48:59.064746 | orchestrator | changed: [testbed-node-5] 2026-02-28 
00:48:59.064750 | orchestrator | 2026-02-28 00:48:59.064754 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-02-28 00:48:59.064758 | orchestrator | Saturday 28 February 2026 00:47:37 +0000 (0:00:01.815) 0:01:14.533 ***** 2026-02-28 00:48:59.064762 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:59.064765 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:59.064769 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:48:59.064773 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:48:59.064777 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:48:59.064780 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:48:59.064784 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:48:59.064788 | orchestrator | 2026-02-28 00:48:59.064792 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:48:59.064796 | orchestrator | Saturday 28 February 2026 00:47:38 +0000 (0:00:01.279) 0:01:15.812 ***** 2026-02-28 00:48:59.064799 | orchestrator | 2026-02-28 00:48:59.064803 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:48:59.064807 | orchestrator | Saturday 28 February 2026 00:47:38 +0000 (0:00:00.115) 0:01:15.928 ***** 2026-02-28 00:48:59.064813 | orchestrator | 2026-02-28 00:48:59.064817 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:48:59.064821 | orchestrator | Saturday 28 February 2026 00:47:38 +0000 (0:00:00.069) 0:01:15.998 ***** 2026-02-28 00:48:59.064825 | orchestrator | 2026-02-28 00:48:59.064828 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:48:59.064832 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:00.274) 0:01:16.272 ***** 2026-02-28 00:48:59.064836 | orchestrator | 2026-02-28 00:48:59.064840 | orchestrator | 
TASK [common : Flush handlers] ************************************************* 2026-02-28 00:48:59.064843 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:00.066) 0:01:16.338 ***** 2026-02-28 00:48:59.064847 | orchestrator | 2026-02-28 00:48:59.064851 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:48:59.064855 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:00.064) 0:01:16.403 ***** 2026-02-28 00:48:59.064858 | orchestrator | 2026-02-28 00:48:59.064862 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-28 00:48:59.064866 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:00.074) 0:01:16.477 ***** 2026-02-28 00:48:59.064870 | orchestrator | 2026-02-28 00:48:59.064873 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-28 00:48:59.064880 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:00.094) 0:01:16.572 ***** 2026-02-28 00:48:59.064884 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:59.064887 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:48:59.064891 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:48:59.064895 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:59.064899 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:48:59.064902 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:48:59.064906 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:48:59.064910 | orchestrator | 2026-02-28 00:48:59.064914 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-28 00:48:59.064918 | orchestrator | Saturday 28 February 2026 00:48:10 +0000 (0:00:30.644) 0:01:47.217 ***** 2026-02-28 00:48:59.064922 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:59.064925 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:48:59.064929 
| orchestrator | changed: [testbed-node-5] 2026-02-28 00:48:59.064933 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:48:59.064951 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:48:59.064956 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:48:59.064960 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:59.064963 | orchestrator | 2026-02-28 00:48:59.064967 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-28 00:48:59.064971 | orchestrator | Saturday 28 February 2026 00:48:39 +0000 (0:00:29.761) 0:02:16.979 ***** 2026-02-28 00:48:59.064974 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:48:59.064978 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:48:59.064982 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:48:59.064986 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:48:59.064989 | orchestrator | ok: [testbed-manager] 2026-02-28 00:48:59.064993 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:48:59.064997 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:48:59.065001 | orchestrator | 2026-02-28 00:48:59.065004 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-28 00:48:59.065008 | orchestrator | Saturday 28 February 2026 00:48:42 +0000 (0:00:02.213) 0:02:19.193 ***** 2026-02-28 00:48:59.065012 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:48:59.065016 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:48:59.065019 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:48:59.065023 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:48:59.065027 | orchestrator | changed: [testbed-manager] 2026-02-28 00:48:59.065031 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:48:59.065034 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:48:59.065041 | orchestrator | 2026-02-28 00:48:59.065045 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-28 00:48:59.065049 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:48:59.065055 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:48:59.065059 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:48:59.065063 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:48:59.065067 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:48:59.065071 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:48:59.065075 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-28 00:48:59.065078 | orchestrator | 2026-02-28 00:48:59.065082 | orchestrator | 2026-02-28 00:48:59.065086 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:48:59.065090 | orchestrator | Saturday 28 February 2026 00:48:56 +0000 (0:00:14.405) 0:02:33.598 ***** 2026-02-28 00:48:59.065093 | orchestrator | =============================================================================== 2026-02-28 00:48:59.065097 | orchestrator | common : Restart fluentd container ------------------------------------- 30.65s 2026-02-28 00:48:59.065101 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 29.76s 2026-02-28 00:48:59.065105 | orchestrator | common : Restart cron container ---------------------------------------- 14.41s 2026-02-28 00:48:59.065109 | orchestrator | common : Copying over config.json files for services ------------------- 11.38s 2026-02-28 00:48:59.065112 | 
orchestrator | common : Copying over fluentd.conf -------------------------------------- 8.70s 2026-02-28 00:48:59.065116 | orchestrator | common : Copying over cron logrotate config file ------------------------ 7.64s 2026-02-28 00:48:59.065120 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.10s 2026-02-28 00:48:59.065124 | orchestrator | common : Check common containers ---------------------------------------- 4.42s 2026-02-28 00:48:59.065128 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.98s 2026-02-28 00:48:59.065131 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.77s 2026-02-28 00:48:59.065135 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.48s 2026-02-28 00:48:59.065139 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.35s 2026-02-28 00:48:59.065142 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.31s 2026-02-28 00:48:59.065146 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.68s 2026-02-28 00:48:59.065152 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.55s 2026-02-28 00:48:59.065156 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.21s 2026-02-28 00:48:59.065162 | orchestrator | common : Creating log volume -------------------------------------------- 1.82s 2026-02-28 00:48:59.065166 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.77s 2026-02-28 00:48:59.065170 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.72s 2026-02-28 00:48:59.065174 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.65s 2026-02-28 00:48:59.065178 | 
orchestrator | 2026-02-28 00:48:59 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:48:59.065184 | orchestrator | 2026-02-28 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:02.086530 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:02.089616 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:02.089666 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task 8b534aaa-affc-4654-becf-7cf24aa6873b is in state STARTED 2026-02-28 00:49:02.090407 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:49:02.090889 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:02.091614 | orchestrator | 2026-02-28 00:49:02 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:02.091639 | orchestrator | 2026-02-28 00:49:02 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:05.116550 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:05.117249 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:05.118285 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task 8b534aaa-affc-4654-becf-7cf24aa6873b is in state STARTED 2026-02-28 00:49:05.119904 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:49:05.120822 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:05.121833 | orchestrator | 2026-02-28 00:49:05 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:05.121870 | 
orchestrator | 2026-02-28 00:49:05 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:08.153936 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:08.155765 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:08.156842 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task 8b534aaa-affc-4654-becf-7cf24aa6873b is in state STARTED 2026-02-28 00:49:08.157254 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:49:08.159568 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:08.160375 | orchestrator | 2026-02-28 00:49:08 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:08.160402 | orchestrator | 2026-02-28 00:49:08 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:11.193948 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:11.194923 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:11.195956 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task 8b534aaa-affc-4654-becf-7cf24aa6873b is in state STARTED 2026-02-28 00:49:11.196969 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:49:11.198644 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:11.199063 | orchestrator | 2026-02-28 00:49:11 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:11.199185 | orchestrator | 2026-02-28 00:49:11 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:14.252970 | orchestrator | 2026-02-28 
00:49:14 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:14.254002 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:14.255220 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task 8b534aaa-affc-4654-becf-7cf24aa6873b is in state STARTED 2026-02-28 00:49:14.256856 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:49:14.258114 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:14.259429 | orchestrator | 2026-02-28 00:49:14 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:14.259451 | orchestrator | 2026-02-28 00:49:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:17.297774 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:17.298613 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:17.311164 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task 8b534aaa-affc-4654-becf-7cf24aa6873b is in state SUCCESS 2026-02-28 00:49:17.311264 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:49:17.311280 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:17.311292 | orchestrator | 2026-02-28 00:49:17 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:17.311305 | orchestrator | 2026-02-28 00:49:17 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:20.409232 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:20.411607 | orchestrator | 2026-02-28 
00:49:20 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:20.417788 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:49:20.418332 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:20.423623 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:49:20.426153 | orchestrator | 2026-02-28 00:49:20 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:20.426198 | orchestrator | 2026-02-28 00:49:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:23.506102 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:23.508679 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:23.511666 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:49:23.514353 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:23.516492 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:49:23.519681 | orchestrator | 2026-02-28 00:49:23 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:23.519785 | orchestrator | 2026-02-28 00:49:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:26.565025 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:26.565387 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:26.566310 | orchestrator | 2026-02-28 
00:49:26 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:49:26.567043 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:26.568546 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:49:26.569280 | orchestrator | 2026-02-28 00:49:26 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:26.569307 | orchestrator | 2026-02-28 00:49:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:29.628345 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:29.630463 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:29.635296 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state STARTED 2026-02-28 00:49:29.637316 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:29.641550 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:49:29.645800 | orchestrator | 2026-02-28 00:49:29 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:29.645833 | orchestrator | 2026-02-28 00:49:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:32.717125 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:32.719439 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:32.721403 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task 8269498a-6250-4976-9875-eefbf8b068aa is in state SUCCESS 2026-02-28 00:49:32.722789 | orchestrator | 2026-02-28 
00:49:32.722831 | orchestrator | 2026-02-28 00:49:32.722837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:49:32.722842 | orchestrator | 2026-02-28 00:49:32.722846 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:49:32.722850 | orchestrator | Saturday 28 February 2026 00:49:02 +0000 (0:00:00.494) 0:00:00.494 ***** 2026-02-28 00:49:32.722854 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:49:32.722859 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:49:32.722863 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:49:32.722866 | orchestrator | 2026-02-28 00:49:32.722870 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:49:32.722874 | orchestrator | Saturday 28 February 2026 00:49:02 +0000 (0:00:00.737) 0:00:01.231 ***** 2026-02-28 00:49:32.722879 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-28 00:49:32.722883 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-28 00:49:32.722886 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-28 00:49:32.722890 | orchestrator | 2026-02-28 00:49:32.722894 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-28 00:49:32.722898 | orchestrator | 2026-02-28 00:49:32.722902 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-28 00:49:32.722917 | orchestrator | Saturday 28 February 2026 00:49:03 +0000 (0:00:00.521) 0:00:01.753 ***** 2026-02-28 00:49:32.722921 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:49:32.722926 | orchestrator | 2026-02-28 00:49:32.722930 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-28 
00:49:32.722934 | orchestrator | Saturday 28 February 2026 00:49:04 +0000 (0:00:00.700) 0:00:02.453 ***** 2026-02-28 00:49:32.722938 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-28 00:49:32.722942 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-28 00:49:32.722946 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-28 00:49:32.722949 | orchestrator | 2026-02-28 00:49:32.722953 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-28 00:49:32.722957 | orchestrator | Saturday 28 February 2026 00:49:05 +0000 (0:00:00.948) 0:00:03.402 ***** 2026-02-28 00:49:32.722961 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-28 00:49:32.722965 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-28 00:49:32.722969 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-28 00:49:32.722972 | orchestrator | 2026-02-28 00:49:32.722976 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-28 00:49:32.722980 | orchestrator | Saturday 28 February 2026 00:49:07 +0000 (0:00:02.192) 0:00:05.594 ***** 2026-02-28 00:49:32.722984 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:32.722988 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:32.723005 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:32.723012 | orchestrator | 2026-02-28 00:49:32.723018 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-28 00:49:32.723024 | orchestrator | Saturday 28 February 2026 00:49:09 +0000 (0:00:02.158) 0:00:07.753 ***** 2026-02-28 00:49:32.723028 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:32.723032 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:32.723036 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:32.723040 | orchestrator | 2026-02-28 00:49:32.723044 | 
orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:49:32.723048 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:49:32.723053 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:49:32.723057 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:49:32.723061 | orchestrator | 2026-02-28 00:49:32.723065 | orchestrator | 2026-02-28 00:49:32.723068 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:49:32.723072 | orchestrator | Saturday 28 February 2026 00:49:15 +0000 (0:00:05.553) 0:00:13.306 ***** 2026-02-28 00:49:32.723076 | orchestrator | =============================================================================== 2026-02-28 00:49:32.723080 | orchestrator | memcached : Restart memcached container --------------------------------- 5.55s 2026-02-28 00:49:32.723084 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.19s 2026-02-28 00:49:32.723087 | orchestrator | memcached : Check memcached container ----------------------------------- 2.16s 2026-02-28 00:49:32.723098 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.95s 2026-02-28 00:49:32.723102 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s 2026-02-28 00:49:32.723106 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.70s 2026-02-28 00:49:32.723110 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2026-02-28 00:49:32.723113 | orchestrator | 2026-02-28 00:49:32.723117 | orchestrator | 2026-02-28 00:49:32.723121 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-02-28 00:49:32.723129 | orchestrator | 2026-02-28 00:49:32.723132 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:49:32.723136 | orchestrator | Saturday 28 February 2026 00:49:02 +0000 (0:00:00.422) 0:00:00.422 ***** 2026-02-28 00:49:32.723140 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:49:32.723144 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:49:32.723148 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:49:32.723153 | orchestrator | 2026-02-28 00:49:32.723159 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:49:32.723178 | orchestrator | Saturday 28 February 2026 00:49:02 +0000 (0:00:00.492) 0:00:00.914 ***** 2026-02-28 00:49:32.723186 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-28 00:49:32.723192 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-28 00:49:32.723198 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-28 00:49:32.723204 | orchestrator | 2026-02-28 00:49:32.723210 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-28 00:49:32.723216 | orchestrator | 2026-02-28 00:49:32.723222 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-28 00:49:32.723227 | orchestrator | Saturday 28 February 2026 00:49:03 +0000 (0:00:00.674) 0:00:01.589 ***** 2026-02-28 00:49:32.723233 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:49:32.723238 | orchestrator | 2026-02-28 00:49:32.723244 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-28 00:49:32.723250 | orchestrator | Saturday 28 February 2026 00:49:04 +0000 (0:00:00.850) 0:00:02.439 ***** 2026-02-28 00:49:32.723257 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723364 | orchestrator | 2026-02-28 
00:49:32.723371 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-28 00:49:32.723375 | orchestrator | Saturday 28 February 2026 00:49:05 +0000 (0:00:01.319) 0:00:03.759 ***** 2026-02-28 00:49:32.723379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 
00:49:32.723402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723439 | orchestrator | 2026-02-28 00:49:32.723445 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-28 00:49:32.723449 | orchestrator | Saturday 28 February 2026 00:49:08 +0000 (0:00:02.966) 0:00:06.725 ***** 2026-02-28 00:49:32.723454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 
'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723487 | orchestrator | 2026-02-28 00:49:32.723494 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-28 00:49:32.723499 | orchestrator | Saturday 28 February 2026 00:49:11 +0000 (0:00:02.824) 0:00:09.550 ***** 2026-02-28 00:49:32.723503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723531 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-28 00:49:32.723536 | orchestrator | 2026-02-28 00:49:32.723540 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-28 00:49:32.723544 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:02.048) 0:00:11.599 ***** 2026-02-28 00:49:32.723549 | orchestrator | 2026-02-28 00:49:32.723553 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-28 00:49:32.723559 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:00.143) 0:00:11.742 ***** 2026-02-28 00:49:32.723564 | orchestrator | 2026-02-28 00:49:32.723568 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-28 00:49:32.723572 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:00.083) 0:00:11.826 ***** 2026-02-28 00:49:32.723576 | orchestrator | 2026-02-28 00:49:32.723580 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-28 00:49:32.723584 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:00.074) 0:00:11.900 ***** 2026-02-28 00:49:32.723588 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:32.723591 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:32.723595 | orchestrator | changed: [testbed-node-2] 
2026-02-28 00:49:32.723599 | orchestrator | 2026-02-28 00:49:32.723603 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-28 00:49:32.723607 | orchestrator | Saturday 28 February 2026 00:49:18 +0000 (0:00:04.399) 0:00:16.299 ***** 2026-02-28 00:49:32.723611 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:49:32.723614 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:49:32.723618 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:49:32.723622 | orchestrator | 2026-02-28 00:49:32.723626 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:49:32.723630 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:49:32.723635 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:49:32.723638 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:49:32.723642 | orchestrator | 2026-02-28 00:49:32.723649 | orchestrator | 2026-02-28 00:49:32.723653 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:49:32.723657 | orchestrator | Saturday 28 February 2026 00:49:29 +0000 (0:00:11.284) 0:00:27.584 ***** 2026-02-28 00:49:32.723660 | orchestrator | =============================================================================== 2026-02-28 00:49:32.723664 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.28s 2026-02-28 00:49:32.723668 | orchestrator | redis : Restart redis container ----------------------------------------- 4.40s 2026-02-28 00:49:32.723672 | orchestrator | redis : Copying over default config.json files -------------------------- 2.97s 2026-02-28 00:49:32.723676 | orchestrator | redis : Copying over redis config files 
--------------------------------- 2.82s 2026-02-28 00:49:32.723679 | orchestrator | redis : Check redis containers ------------------------------------------ 2.05s 2026-02-28 00:49:32.723683 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.32s 2026-02-28 00:49:32.723687 | orchestrator | redis : include_tasks --------------------------------------------------- 0.85s 2026-02-28 00:49:32.723691 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s 2026-02-28 00:49:32.723697 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2026-02-28 00:49:32.723701 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.30s 2026-02-28 00:49:32.724781 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:32.727752 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:49:32.731517 | orchestrator | 2026-02-28 00:49:32 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:49:32.731563 | orchestrator | 2026-02-28 00:49:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:49:35.783283 | orchestrator | 2026-02-28 00:49:35 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:49:35.786064 | orchestrator | 2026-02-28 00:49:35 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:49:35.789305 | orchestrator | 2026-02-28 00:49:35 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state STARTED 2026-02-28 00:49:35.793435 | orchestrator | 2026-02-28 00:49:35 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:49:35.797405 | orchestrator | 2026-02-28 00:49:35 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 
00:49:35.797460 | orchestrator | 2026-02-28 00:49:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:28.444832 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:50:28.445590 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:50:28.446955 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:50:28.448845 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 5fe8f21a-3dbe-42f2-bf04-d5aff9281197 is in state SUCCESS 2026-02-28 00:50:28.451809 | orchestrator | 2026-02-28 00:50:28.451871 | orchestrator | 2026-02-28 00:50:28.451905 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:50:28.451918 | orchestrator | 2026-02-28 00:50:28.451930 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:50:28.451943 | orchestrator | Saturday 28 February 2026 00:49:01 +0000 (0:00:00.321) 0:00:00.321 ***** 2026-02-28 00:50:28.451954 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:50:28.451966 | orchestrator | ok: [testbed-node-1] 
2026-02-28 00:50:28.451978 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:50:28.451989 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:50:28.452000 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:50:28.452011 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:50:28.452022 | orchestrator | 2026-02-28 00:50:28.452058 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:50:28.452076 | orchestrator | Saturday 28 February 2026 00:49:02 +0000 (0:00:00.941) 0:00:01.263 ***** 2026-02-28 00:50:28.452155 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:28.452174 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:28.452191 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:28.452210 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:28.452228 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:28.452246 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-28 00:50:28.452264 | orchestrator | 2026-02-28 00:50:28.452283 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-28 00:50:28.452302 | orchestrator | 2026-02-28 00:50:28.452321 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-28 00:50:28.452337 | orchestrator | Saturday 28 February 2026 00:49:03 +0000 (0:00:00.940) 0:00:02.203 ***** 2026-02-28 00:50:28.452350 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:50:28.452363 | orchestrator | 2026-02-28 00:50:28.452374 | 
orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-28 00:50:28.452385 | orchestrator | Saturday 28 February 2026 00:49:05 +0000 (0:00:01.829) 0:00:04.032 ***** 2026-02-28 00:50:28.452397 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-28 00:50:28.452408 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-28 00:50:28.452420 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-28 00:50:28.452431 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-28 00:50:28.452441 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-28 00:50:28.452452 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-28 00:50:28.452490 | orchestrator | 2026-02-28 00:50:28.452501 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-28 00:50:28.452512 | orchestrator | Saturday 28 February 2026 00:49:07 +0000 (0:00:01.857) 0:00:05.890 ***** 2026-02-28 00:50:28.452523 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-28 00:50:28.452535 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-28 00:50:28.452546 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-28 00:50:28.452557 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-28 00:50:28.452568 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-28 00:50:28.452579 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-28 00:50:28.452590 | orchestrator | 2026-02-28 00:50:28.452601 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-28 00:50:28.452612 | orchestrator | Saturday 28 February 2026 00:49:09 +0000 (0:00:01.929) 0:00:07.820 ***** 2026-02-28 00:50:28.452623 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-28 
00:50:28.452634 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:50:28.452646 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-28 00:50:28.452676 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:50:28.452687 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-28 00:50:28.452698 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:50:28.452709 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-28 00:50:28.452720 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:50:28.452731 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-28 00:50:28.452742 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:50:28.452753 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-28 00:50:28.452764 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:50:28.452775 | orchestrator | 2026-02-28 00:50:28.452786 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-28 00:50:28.452796 | orchestrator | Saturday 28 February 2026 00:49:11 +0000 (0:00:01.894) 0:00:09.715 ***** 2026-02-28 00:50:28.452807 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:50:28.452818 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:50:28.452829 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:50:28.452839 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:50:28.452850 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:50:28.452861 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:50:28.452872 | orchestrator | 2026-02-28 00:50:28.452882 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-28 00:50:28.452893 | orchestrator | Saturday 28 February 2026 00:49:12 +0000 (0:00:01.138) 0:00:10.853 ***** 2026-02-28 00:50:28.452940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.452956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.452978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.452990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453001 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453149 | orchestrator | 2026-02-28 00:50:28.453161 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-28 00:50:28.453172 | orchestrator | Saturday 28 February 2026 00:49:14 +0000 (0:00:02.055) 0:00:12.909 ***** 2026-02-28 00:50:28.453189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453366 | orchestrator | 2026-02-28 00:50:28.453382 
| orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-28 00:50:28.453394 | orchestrator | Saturday 28 February 2026 00:49:19 +0000 (0:00:05.051) 0:00:17.961 ***** 2026-02-28 00:50:28.453405 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:50:28.453416 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:50:28.453427 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:50:28.453437 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:50:28.453448 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:50:28.453459 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:50:28.453470 | orchestrator | 2026-02-28 00:50:28.453481 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-28 00:50:28.453492 | orchestrator | Saturday 28 February 2026 00:49:21 +0000 (0:00:02.088) 0:00:20.049 ***** 2026-02-28 00:50:28.453503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453561 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453589 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-28 00:50:28.453684 | orchestrator | 2026-02-28 00:50:28.453695 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-02-28 00:50:28.453706 | orchestrator | Saturday 28 February 2026 00:49:25 +0000 (0:00:03.856) 0:00:23.906 ***** 2026-02-28 00:50:28.453717 | orchestrator | 2026-02-28 00:50:28.453728 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:50:28.453739 | orchestrator | Saturday 28 February 2026 00:49:25 +0000 (0:00:00.413) 0:00:24.319 ***** 2026-02-28 00:50:28.453749 | orchestrator | 2026-02-28 00:50:28.453760 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:50:28.453771 | orchestrator | Saturday 28 February 2026 00:49:26 +0000 (0:00:00.166) 0:00:24.486 ***** 2026-02-28 00:50:28.453782 | orchestrator | 2026-02-28 00:50:28.453793 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:50:28.453804 | orchestrator | Saturday 28 February 2026 00:49:26 +0000 (0:00:00.216) 0:00:24.702 ***** 2026-02-28 00:50:28.453814 | orchestrator | 2026-02-28 00:50:28.453826 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:50:28.453836 | orchestrator | Saturday 28 February 2026 00:49:26 +0000 (0:00:00.227) 0:00:24.929 ***** 2026-02-28 00:50:28.453847 | orchestrator | 2026-02-28 00:50:28.453858 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-28 00:50:28.453869 | orchestrator | Saturday 28 February 2026 00:49:26 +0000 (0:00:00.140) 0:00:25.070 ***** 2026-02-28 00:50:28.453880 | orchestrator | 2026-02-28 00:50:28.453891 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-28 00:50:28.453901 | orchestrator | Saturday 28 February 2026 00:49:26 +0000 (0:00:00.191) 0:00:25.261 ***** 2026-02-28 00:50:28.453913 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:50:28.453923 | orchestrator | 
changed: [testbed-node-4] 2026-02-28 00:50:28.453934 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:50:28.453945 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:50:28.453956 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:50:28.453967 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:50:28.453977 | orchestrator | 2026-02-28 00:50:28.453988 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-28 00:50:28.453999 | orchestrator | Saturday 28 February 2026 00:49:38 +0000 (0:00:12.020) 0:00:37.281 ***** 2026-02-28 00:50:28.454074 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:50:28.454144 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:50:28.454158 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:50:28.454170 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:50:28.454181 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:50:28.454192 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:50:28.454203 | orchestrator | 2026-02-28 00:50:28.454214 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-28 00:50:28.454225 | orchestrator | Saturday 28 February 2026 00:49:41 +0000 (0:00:03.107) 0:00:40.389 ***** 2026-02-28 00:50:28.454236 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:50:28.454247 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:50:28.454258 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:50:28.454269 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:50:28.454280 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:50:28.454291 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:50:28.454302 | orchestrator | 2026-02-28 00:50:28.454313 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-28 00:50:28.454324 | orchestrator | Saturday 28 February 2026 00:49:53 +0000 (0:00:11.920) 0:00:52.310 ***** 2026-02-28 
00:50:28.454335 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-28 00:50:28.454346 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-28 00:50:28.454358 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-28 00:50:28.454369 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-28 00:50:28.454380 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-28 00:50:28.454398 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-28 00:50:28.454410 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-28 00:50:28.454421 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-28 00:50:28.454432 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-28 00:50:28.454443 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-28 00:50:28.454454 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-28 00:50:28.454465 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-28 00:50:28.454476 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:28.454487 | 
orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:28.454498 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:28.454509 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:28.454520 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:28.454531 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-28 00:50:28.454542 | orchestrator | 2026-02-28 00:50:28.454553 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-28 00:50:28.454577 | orchestrator | Saturday 28 February 2026 00:50:03 +0000 (0:00:09.418) 0:01:01.729 ***** 2026-02-28 00:50:28.454589 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-28 00:50:28.454600 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-28 00:50:28.454611 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:50:28.454622 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:50:28.454633 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-28 00:50:28.454644 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:50:28.454654 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-28 00:50:28.454665 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-28 00:50:28.454676 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-28 00:50:28.454687 | orchestrator | 2026-02-28 00:50:28.454698 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-28 00:50:28.454709 | orchestrator | Saturday 28 
February 2026 00:50:07 +0000 (0:00:03.774) 0:01:05.503 ***** 2026-02-28 00:50:28.454720 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-28 00:50:28.454731 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:50:28.454742 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-28 00:50:28.454753 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:50:28.454764 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-28 00:50:28.454775 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:50:28.454786 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-28 00:50:28.454797 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-28 00:50:28.454808 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-28 00:50:28.454818 | orchestrator | 2026-02-28 00:50:28.454829 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-28 00:50:28.454840 | orchestrator | Saturday 28 February 2026 00:50:11 +0000 (0:00:04.592) 0:01:10.095 ***** 2026-02-28 00:50:28.454851 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:50:28.454862 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:50:28.454873 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:50:28.454884 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:50:28.454895 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:50:28.454906 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:50:28.454916 | orchestrator | 2026-02-28 00:50:28.454928 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:50:28.454939 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:50:28.454951 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 
skipped=3  rescued=0 ignored=0 2026-02-28 00:50:28.454962 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:50:28.454973 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 00:50:28.454984 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 00:50:28.455008 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 00:50:28.455020 | orchestrator | 2026-02-28 00:50:28.455031 | orchestrator | 2026-02-28 00:50:28.455042 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:50:28.455053 | orchestrator | Saturday 28 February 2026 00:50:25 +0000 (0:00:13.581) 0:01:23.677 ***** 2026-02-28 00:50:28.455065 | orchestrator | =============================================================================== 2026-02-28 00:50:28.455083 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 25.50s 2026-02-28 00:50:28.455182 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.02s 2026-02-28 00:50:28.455198 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 9.43s 2026-02-28 00:50:28.455210 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.05s 2026-02-28 00:50:28.455220 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.59s 2026-02-28 00:50:28.455231 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.86s 2026-02-28 00:50:28.455242 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.76s 2026-02-28 00:50:28.455253 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready 
------------ 3.11s 2026-02-28 00:50:28.455264 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.09s 2026-02-28 00:50:28.455275 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.06s 2026-02-28 00:50:28.455285 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.93s 2026-02-28 00:50:28.455296 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.89s 2026-02-28 00:50:28.455307 | orchestrator | module-load : Load modules ---------------------------------------------- 1.86s 2026-02-28 00:50:28.455318 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.83s 2026-02-28 00:50:28.455328 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.35s 2026-02-28 00:50:28.455339 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.14s 2026-02-28 00:50:28.455350 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.94s 2026-02-28 00:50:28.455361 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2026-02-28 00:50:28.455372 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:50:28.455383 | orchestrator | 2026-02-28 00:50:28 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:50:28.455394 | orchestrator | 2026-02-28 00:50:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:50:31.489524 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:50:31.490173 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:50:31.492514 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task 
9b186cef-8799-4f46-8c8f-19ad35afba75 is in state STARTED 2026-02-28 00:50:31.493533 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:50:31.495332 | orchestrator | 2026-02-28 00:50:31 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:50:31.495378 | orchestrator | 2026-02-28 00:50:31 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:51:42.748967 | orchestrator | 2026-02-28 00:51:42 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:51:42.749560 | orchestrator | 2026-02-28 00:51:42 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:51:42.752720 | orchestrator | 2026-02-28 00:51:42 | INFO  | Task 9b186cef-8799-4f46-8c8f-19ad35afba75 is in state SUCCESS 2026-02-28 00:51:42.754310 | orchestrator | 2026-02-28 00:51:42.754360 | orchestrator | 2026-02-28 00:51:42.754373 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-28 00:51:42.754384 | orchestrator | 2026-02-28 00:51:42.754395 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-28 00:51:42.754406 | orchestrator | Saturday 28 February 2026 00:46:23 +0000 (0:00:00.200) 0:00:00.200 ***** 2026-02-28 00:51:42.754416 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:51:42.754427 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:51:42.754438 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:51:42.754448 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:51:42.754458 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:51:42.754468 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:51:42.754477 | orchestrator | 2026-02-28 00:51:42.754488 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-28 00:51:42.754498 | orchestrator | Saturday 28 February 2026 00:46:24 +0000 (0:00:00.760) 0:00:00.961 ***** 2026-02-28 00:51:42.754508 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:42.754519 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:42.754529 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:42.754539 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.754549 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:42.754558
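The repeating `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a poll-until-done loop over the submitted task IDs. A minimal sketch of that pattern, assuming a hypothetical `fetch_state(task_id)` accessor in place of the real osism client:

```python
import time


def wait_for_tasks(task_ids, fetch_state, interval=1.0, sleep=time.sleep):
    """Poll task states until every task has left the STARTED state.

    fetch_state(task_id) -> str is an assumed accessor returning e.g.
    'STARTED' or 'SUCCESS'; the actual client behind the log output
    is not shown here. Returns {task_id: final_state}.
    """
    pending = dict.fromkeys(task_ids, "STARTED")
    done = {}
    while pending:
        for task_id in list(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                # Task reached a terminal state; stop polling it.
                done[task_id] = state
                del pending[task_id]
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return done
```

In the log each check cycle reports all still-pending tasks before sleeping, which is why five `STARTED` lines recur every few seconds until the first task flips to `SUCCESS` and drops out of the loop.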
| orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.754568 | orchestrator | 
2026-02-28 00:51:42.754578 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-28 00:51:42.754588 | orchestrator | Saturday 28 February 2026 00:46:25 +0000 (0:00:00.599) 0:00:01.560 *****
2026-02-28 00:51:42.754625 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.754635 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.754645 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.754655 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.754666 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.754676 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.754686 | orchestrator | 
2026-02-28 00:51:42.754696 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-28 00:51:42.754705 | orchestrator | Saturday 28 February 2026 00:46:25 +0000 (0:00:00.672) 0:00:02.233 *****
2026-02-28 00:51:42.754715 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:51:42.754725 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:51:42.754735 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:51:42.754744 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.754754 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.754764 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.754774 | orchestrator | 
2026-02-28 00:51:42.754784 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-28 00:51:42.754794 | orchestrator | Saturday 28 February 2026 00:46:27 +0000 (0:00:01.919) 0:00:04.153 *****
2026-02-28 00:51:42.754804 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:51:42.754813 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:51:42.754823 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:51:42.754833 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.754843 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.754853 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.754865 | orchestrator | 
2026-02-28 00:51:42.754876 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-28 00:51:42.754888 | orchestrator | Saturday 28 February 2026 00:46:28 +0000 (0:00:01.100) 0:00:05.254 *****
2026-02-28 00:51:42.754899 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:51:42.754910 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:51:42.754922 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:51:42.754933 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.754944 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.754955 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.754965 | orchestrator | 
2026-02-28 00:51:42.754988 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-28 00:51:42.754998 | orchestrator | Saturday 28 February 2026 00:46:30 +0000 (0:00:01.311) 0:00:06.565 *****
2026-02-28 00:51:42.755008 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.755018 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.755028 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.755038 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.755047 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.755057 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.755067 | orchestrator | 
2026-02-28 00:51:42.755077 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-28 00:51:42.755087 | orchestrator | Saturday 28 February 2026 00:46:31 +0000 (0:00:01.027) 0:00:07.593 *****
2026-02-28 00:51:42.755097 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.755106 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.755116 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.755126 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.755136 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.755145 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.755155 | orchestrator | 
2026-02-28 00:51:42.755165 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-28 00:51:42.755175 | orchestrator | Saturday 28 February 2026 00:46:32 +0000 (0:00:00.788) 0:00:08.381 *****
2026-02-28 00:51:42.755185 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 
2026-02-28 00:51:42.755195 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 
2026-02-28 00:51:42.755212 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-02-28 00:51:42.755255 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-02-28 00:51:42.755265 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.755275 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 
2026-02-28 00:51:42.755285 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-02-28 00:51:42.755295 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.755305 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables) 
2026-02-28 00:51:42.755315 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-02-28 00:51:42.755341 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.755357 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables) 
2026-02-28 00:51:42.755381 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-02-28 00:51:42.755397 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.755413 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.755428 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables) 
2026-02-28 00:51:42.755443 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-02-28 00:51:42.755459 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.755476 | orchestrator | 
2026-02-28 00:51:42.755493 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-28 00:51:42.755510 | orchestrator | Saturday 28 February 2026 00:46:33 +0000 (0:00:00.948) 0:00:09.330 *****
2026-02-28 00:51:42.755527 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.755544 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.755561 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.755576 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.755592 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.755601 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.755611 | orchestrator | 
2026-02-28 00:51:42.755621 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-28 00:51:42.755632 | orchestrator | Saturday 28 February 2026 00:46:34 +0000 (0:00:01.297) 0:00:10.627 *****
2026-02-28 00:51:42.755642 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:51:42.755652 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:51:42.755661 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:51:42.755671 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.755681 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.755694 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.755711 | orchestrator | 
2026-02-28 00:51:42.755727 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-28 00:51:42.755743 | orchestrator | Saturday 28 February 2026 00:46:35 +0000 (0:00:00.914) 0:00:11.542 *****
2026-02-28 00:51:42.755759 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:51:42.755775 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.755842 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:51:42.755863 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.755879 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:51:42.755896 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.755913 | orchestrator | 
2026-02-28 00:51:42.755930 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-28 00:51:42.755947 | orchestrator | Saturday 28 February 2026 00:46:40 +0000 (0:00:05.161) 0:00:16.704 *****
2026-02-28 00:51:42.755963 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.755980 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.755997 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.756014 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.756031 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.756058 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.756068 | orchestrator | 
2026-02-28 00:51:42.756078 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-28 00:51:42.756088 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:01.671) 0:00:18.375 *****
2026-02-28 00:51:42.756098 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.756107 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.756117 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.756127 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.756137 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.756146 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.756156 | orchestrator | 
2026-02-28 00:51:42.756173 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-28 00:51:42.756185 | orchestrator | Saturday 28 February 2026 00:46:45 +0000 (0:00:03.075) 0:00:21.451 *****
2026-02-28 00:51:42.756195 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.756205 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.756235 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.756247 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.756257 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.756266 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.756276 | orchestrator | 
2026-02-28 00:51:42.756286 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-28 00:51:42.756296 | orchestrator | Saturday 28 February 2026 00:46:47 +0000 (0:00:02.001) 0:00:23.453 *****
2026-02-28 00:51:42.756306 | orchestrator | skipping: [testbed-node-3] => (item=rancher) 
2026-02-28 00:51:42.756316 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s) 
2026-02-28 00:51:42.756326 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.756336 | orchestrator | skipping: [testbed-node-4] => (item=rancher) 
2026-02-28 00:51:42.756346 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s) 
2026-02-28 00:51:42.756355 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.756365 | orchestrator | skipping: [testbed-node-5] => (item=rancher) 
2026-02-28 00:51:42.756375 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s) 
2026-02-28 00:51:42.756384 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.756394 | orchestrator | skipping: [testbed-node-0] => (item=rancher) 
2026-02-28 00:51:42.756404 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s) 
2026-02-28 00:51:42.756414 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.756423 | orchestrator | skipping: [testbed-node-1] => (item=rancher) 
2026-02-28 00:51:42.756433 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s) 
2026-02-28 00:51:42.756443 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.756453 | orchestrator | skipping: [testbed-node-2] => (item=rancher) 
2026-02-28 00:51:42.756462 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s) 
2026-02-28 00:51:42.756472 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.756482 | orchestrator | 
2026-02-28 00:51:42.756492 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-28 00:51:42.756512 | orchestrator | Saturday 28 February 2026 00:46:51 +0000 (0:00:04.137) 0:00:27.590 *****
2026-02-28 00:51:42.756523 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.756533 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.756542 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.756552 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.756562 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.756571 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.756581 | orchestrator | 
2026-02-28 00:51:42.756591 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-28 00:51:42.756601 | orchestrator | Saturday 28 February 2026 00:46:53 +0000 (0:00:02.235) 0:00:29.825 *****
2026-02-28 00:51:42.756611 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.756627 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:51:42.756637 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:51:42.756646 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.756656 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.756666 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.756676 | orchestrator | 
2026-02-28 00:51:42.756685 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-28 00:51:42.756695 | orchestrator | 
2026-02-28 00:51:42.756705 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-28 00:51:42.756715 | orchestrator | Saturday 28 February 2026 00:46:56 +0000 (0:00:02.542) 0:00:32.367 *****
2026-02-28 00:51:42.756725 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.756735 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.756745 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.756755 | orchestrator | 
2026-02-28 00:51:42.756765 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-28 00:51:42.756775 | orchestrator | Saturday 28 February 2026 00:46:57 +0000 (0:00:01.870) 0:00:34.238 *****
2026-02-28 00:51:42.756785 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.756795 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.756804 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.756814 | orchestrator | 
2026-02-28 00:51:42.756824 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-28 00:51:42.756834 | orchestrator | Saturday 28 February 2026 00:46:59 +0000 (0:00:01.259) 0:00:35.497 *****
2026-02-28 00:51:42.756844 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.756854 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.756863 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.756873 | orchestrator | 
2026-02-28 00:51:42.756883 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-28 00:51:42.756893 | orchestrator | Saturday 28 February 2026 00:47:00 +0000 (0:00:01.201) 0:00:36.699 *****
2026-02-28 00:51:42.756903 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.756912 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.756922 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.756932 | orchestrator | 
2026-02-28 00:51:42.756942 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-28 00:51:42.756952 | orchestrator | Saturday 28 February 2026 00:47:01 +0000 (0:00:00.801) 0:00:37.501 *****
2026-02-28 00:51:42.756962 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.756972 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.756981 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.756991 | orchestrator | 
2026-02-28 00:51:42.757001 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-28 00:51:42.757011 | orchestrator | Saturday 28 February 2026 00:47:01 +0000 (0:00:00.332) 0:00:37.833 *****
2026-02-28 00:51:42.757021 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.757031 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.757041 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.757050 | orchestrator | 
2026-02-28 00:51:42.757065 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-28 00:51:42.757075 | orchestrator | Saturday 28 February 2026 00:47:02 +0000 (0:00:01.236) 0:00:39.069 *****
2026-02-28 00:51:42.757085 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.757095 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.757105 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.757115 | orchestrator | 
2026-02-28 00:51:42.757125 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-28 00:51:42.757134 | orchestrator | Saturday 28 February 2026 00:47:05 +0000 (0:00:03.010) 0:00:42.079 *****
2026-02-28 00:51:42.757144 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:51:42.757154 | orchestrator | 
2026-02-28 00:51:42.757164 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-28 00:51:42.757180 | orchestrator | Saturday 28 February 2026 00:47:07 +0000 (0:00:01.788) 0:00:43.867 *****
2026-02-28 00:51:42.757190 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.757200 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.757210 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.757386 | orchestrator | 
2026-02-28 00:51:42.757407 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-28 00:51:42.757418 | orchestrator | Saturday 28 February 2026 00:47:11 +0000 (0:00:03.723) 0:00:47.591 *****
2026-02-28 00:51:42.757428 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.757438 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.757448 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.757457 | orchestrator | 
2026-02-28 00:51:42.757467 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-28 00:51:42.757477 | orchestrator | Saturday 28 February 2026 00:47:12 +0000 (0:00:01.483) 0:00:49.075 *****
2026-02-28 00:51:42.757487 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.757497 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.757507 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.757517 | orchestrator | 
2026-02-28 00:51:42.757527 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-28 00:51:42.757536 | orchestrator | Saturday 28 February 2026 00:47:14 +0000 (0:00:01.408) 0:00:50.483 *****
2026-02-28 00:51:42.757546 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.757556 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.757566 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.757575 | orchestrator | 
2026-02-28 00:51:42.757585 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-28 00:51:42.757607 | orchestrator | Saturday 28 February 2026 00:47:16 +0000 (0:00:02.599) 0:00:53.083 *****
2026-02-28 00:51:42.757618 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.757628 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.757638 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.757648 | orchestrator | 
2026-02-28 00:51:42.757657 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-28 00:51:42.757668 | orchestrator | Saturday 28 February 2026 00:47:17 +0000 (0:00:00.826) 0:00:53.910 *****
2026-02-28 00:51:42.757677 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.757687 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.757697 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.757707 | orchestrator | 
2026-02-28 00:51:42.757717 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-28 00:51:42.757727 | orchestrator | Saturday 28 February 2026 00:47:18 +0000 (0:00:00.510) 0:00:54.420 *****
2026-02-28 00:51:42.757737 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.757747 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.757757 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.757766 | orchestrator | 
2026-02-28 00:51:42.757777 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-28 00:51:42.757787 | orchestrator | Saturday 28 February 2026 00:47:20 +0000 (0:00:02.372) 0:00:56.793 *****
2026-02-28 00:51:42.757797 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.757807 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.757817 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.757826 | orchestrator | 
2026-02-28 00:51:42.757836 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-28 00:51:42.757846 | orchestrator | Saturday 28 February 2026 00:47:23 +0000 (0:00:02.780) 0:00:59.573 *****
2026-02-28 00:51:42.757856 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.757866 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.757876 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.757886 | orchestrator | 
2026-02-28 00:51:42.757896 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-28 00:51:42.757907 | orchestrator | Saturday 28 February 2026 00:47:24 +0000 (0:00:00.731) 0:01:00.305 *****
2026-02-28 00:51:42.757928 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-28 00:51:42.757939 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-28 00:51:42.757949 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-28 00:51:42.757959 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-28 00:51:42.757969 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-28 00:51:42.757979 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-28 00:51:42.757996 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-28 00:51:42.758006 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-28 00:51:42.758064 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-28 00:51:42.758076 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-28 00:51:42.758086 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-28 00:51:42.758096 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-28 00:51:42.758106 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-28 00:51:42.758116 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-28 00:51:42.758126 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-28 00:51:42.758136 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.758146 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.758156 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.758165 | orchestrator | 
2026-02-28 00:51:42.758175 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-28 00:51:42.758185 | orchestrator | Saturday 28 February 2026 00:48:18 +0000 (0:00:54.705) 0:01:55.010 *****
2026-02-28 00:51:42.758195 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.758204 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.758214 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.758246 | orchestrator | 
2026-02-28 00:51:42.758256 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-28 00:51:42.758272 | orchestrator | Saturday 28 February 2026 00:48:19 +0000 (0:00:00.484) 0:01:55.495 *****
2026-02-28 00:51:42.758282 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.758292 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.758302 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.758312 | orchestrator | 
2026-02-28 00:51:42.758322 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-28 00:51:42.758332 | orchestrator | Saturday 28 February 2026 00:48:20 +0000 (0:00:01.251) 0:01:56.747 *****
2026-02-28 00:51:42.758342 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.758365 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.758392 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.758403 | orchestrator | 
2026-02-28 00:51:42.758413 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-28 00:51:42.758423 | orchestrator | Saturday 28 February 2026 00:48:22 +0000 (0:00:01.803) 0:01:58.550 *****
2026-02-28 00:51:42.758432 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.758442 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.758452 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.758462 | orchestrator | 
2026-02-28 00:51:42.758472 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-28 00:51:42.758481 | orchestrator | Saturday 28 February 2026 00:48:48 +0000 (0:00:25.868) 0:02:24.419 *****
2026-02-28 00:51:42.758491 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.758501 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.758511 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.758520 | orchestrator | 
2026-02-28 00:51:42.758530 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-28 00:51:42.758540 | orchestrator | Saturday 28 February 2026 00:48:49 +0000 (0:00:01.144) 0:02:25.564 *****
2026-02-28 00:51:42.758550 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.758560 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.758570 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.758579 | orchestrator | 
2026-02-28 00:51:42.758589 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-28 00:51:42.758599 | orchestrator | Saturday 28 February 2026 00:48:50 +0000 (0:00:00.789) 0:02:26.353 *****
2026-02-28 00:51:42.758609 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.758620 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.758630 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.758639 | orchestrator | 
2026-02-28 00:51:42.758649 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-28 00:51:42.758659 | orchestrator | Saturday 28 February 2026 00:48:50 +0000 (0:00:00.828) 0:02:27.182 *****
2026-02-28 00:51:42.758669 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.758679 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.758688 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.758698 | orchestrator | 
2026-02-28 00:51:42.758708 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-28 00:51:42.758717 | orchestrator | Saturday 28 February 2026 00:48:51 +0000 (0:00:00.839) 0:02:28.022 *****
2026-02-28 00:51:42.758727 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.758737 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.758746 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.758756 | orchestrator | 
2026-02-28 00:51:42.758766 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-28 00:51:42.758776 | orchestrator | Saturday 28 February 2026 00:48:52 +0000 (0:00:00.294) 0:02:28.317 *****
2026-02-28 00:51:42.758786 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.758795 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.758805 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.758815 | orchestrator | 
2026-02-28 00:51:42.758824 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-28 00:51:42.758839 | orchestrator | Saturday 28 February 2026 00:48:52 +0000 (0:00:00.588) 0:02:28.905 *****
2026-02-28 00:51:42.758849 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.758859 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.758868 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.758878 | orchestrator | 
2026-02-28 00:51:42.758888 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-28 00:51:42.758898 | orchestrator | Saturday 28 February 2026 00:48:53 +0000 (0:00:00.657) 0:02:29.562 *****
2026-02-28 00:51:42.758968 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.758978 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.758988 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.758998 | orchestrator | 
2026-02-28 00:51:42.759015 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-28 00:51:42.759026 | orchestrator | Saturday 28 February 2026 00:48:54 +0000 (0:00:00.966) 0:02:30.528 *****
2026-02-28 00:51:42.759035 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:51:42.759045 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:51:42.759055 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:51:42.759065 | orchestrator | 
2026-02-28 00:51:42.759075 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-28 00:51:42.759085 | orchestrator | Saturday 28 February 2026 00:48:54 +0000 (0:00:00.717) 0:02:31.246 *****
2026-02-28 00:51:42.759095 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.759104 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.759114 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.759124 | orchestrator | 
2026-02-28 00:51:42.759133 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-28 00:51:42.759143 | orchestrator | Saturday 28 February 2026 00:48:55 +0000 (0:00:00.292) 0:02:31.532 *****
2026-02-28 00:51:42.759153 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:51:42.759163 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:51:42.759172 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:51:42.759182 | orchestrator | 
2026-02-28 00:51:42.759192 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-28 00:51:42.759202 | orchestrator | Saturday 28 February 2026 00:48:55 +0000 (0:00:00.292) 0:02:31.825 *****
2026-02-28 00:51:42.759212 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.759240 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.759250 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.759260 | orchestrator | 
2026-02-28 00:51:42.759270 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-28 00:51:42.759280 | orchestrator | Saturday 28 February 2026 00:48:56 +0000 (0:00:00.723) 0:02:32.548 *****
2026-02-28 00:51:42.759290 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:51:42.759307 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:51:42.759317 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:51:42.759327 | orchestrator | 
2026-02-28 00:51:42.759337 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-28 00:51:42.759347 | orchestrator | Saturday 28 February 2026 00:48:56 +0000 (0:00:00.542) 0:02:33.091 *****
2026-02-28 00:51:42.759358 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-28 00:51:42.759368 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-28 00:51:42.759378 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-28 00:51:42.759388 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-28 00:51:42.759398 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-28 00:51:42.759408 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-28 00:51:42.759418 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-28 00:51:42.759428 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-28 00:51:42.759438 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-28 00:51:42.759448 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-28 00:51:42.759458 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-28 00:51:42.759468 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-28 00:51:42.759478 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-28 00:51:42.759502 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-28 00:51:42.759512 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-28 00:51:42.759522 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-28 00:51:42.759533 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-28 00:51:42.759542 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-28 00:51:42.759552 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-28 00:51:42.759562 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-28 00:51:42.759572 | orchestrator | 
2026-02-28 00:51:42.759582 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-28 00:51:42.759592 | orchestrator | 
2026-02-28 00:51:42.759607 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-28 00:51:42.759618 | orchestrator | Saturday 28 February 2026 00:48:59 +0000 (0:00:02.704) 0:02:35.795 *****
2026-02-28 00:51:42.759628 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:51:42.759638 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:51:42.759648 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:51:42.759658 | orchestrator | 
2026-02-28 00:51:42.759668 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-28 00:51:42.759678 | orchestrator | Saturday 28 February 2026 00:48:59 +0000 (0:00:00.430) 0:02:36.227 *****
2026-02-28 00:51:42.759688 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:51:42.759698 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:51:42.759708 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:51:42.759718 | orchestrator | 
2026-02-28 00:51:42.759728 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-28 00:51:42.759738 | orchestrator | Saturday 28 February 2026 00:49:00 +0000 (0:00:00.658) 0:02:36.885 *****
2026-02-28 00:51:42.759749 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:51:42.759759 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:51:42.759769 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:51:42.759779 | orchestrator | 
2026-02-28 00:51:42.759789 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-28 00:51:42.759799 | orchestrator | Saturday 28 February 2026 00:49:00 +0000 (0:00:00.563) 0:02:37.182 *****
2026-02-28 00:51:42.759809 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:51:42.759819 | orchestrator | 
2026-02-28 00:51:42.759829 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-28 00:51:42.759839 | orchestrator | Saturday 28 February 2026 00:49:01 +0000 (0:00:00.563) 0:02:37.746 *****
2026-02-28 00:51:42.759849 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:51:42.759859 | 
orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:42.759869 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:42.759879 | orchestrator | 2026-02-28 00:51:42.759889 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-28 00:51:42.759899 | orchestrator | Saturday 28 February 2026 00:49:01 +0000 (0:00:00.266) 0:02:38.013 ***** 2026-02-28 00:51:42.759909 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:42.759920 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:42.759930 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:42.759940 | orchestrator | 2026-02-28 00:51:42.759950 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-28 00:51:42.759965 | orchestrator | Saturday 28 February 2026 00:49:02 +0000 (0:00:00.350) 0:02:38.364 ***** 2026-02-28 00:51:42.759976 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:42.759986 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:42.760002 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:42.760012 | orchestrator | 2026-02-28 00:51:42.760022 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-28 00:51:42.760032 | orchestrator | Saturday 28 February 2026 00:49:02 +0000 (0:00:00.317) 0:02:38.681 ***** 2026-02-28 00:51:42.760042 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:51:42.760052 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:51:42.760062 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:51:42.760071 | orchestrator | 2026-02-28 00:51:42.760081 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-28 00:51:42.760092 | orchestrator | Saturday 28 February 2026 00:49:03 +0000 (0:00:00.868) 0:02:39.550 ***** 2026-02-28 00:51:42.760102 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:51:42.760112 | 
orchestrator | changed: [testbed-node-4] 2026-02-28 00:51:42.760121 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:51:42.760131 | orchestrator | 2026-02-28 00:51:42.760141 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-28 00:51:42.760152 | orchestrator | Saturday 28 February 2026 00:49:04 +0000 (0:00:00.962) 0:02:40.512 ***** 2026-02-28 00:51:42.760162 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:51:42.760171 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:51:42.760181 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:51:42.760191 | orchestrator | 2026-02-28 00:51:42.760202 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-28 00:51:42.760212 | orchestrator | Saturday 28 February 2026 00:49:05 +0000 (0:00:01.187) 0:02:41.699 ***** 2026-02-28 00:51:42.760246 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:51:42.760256 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:51:42.760266 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:51:42.760276 | orchestrator | 2026-02-28 00:51:42.760286 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-28 00:51:42.760296 | orchestrator | 2026-02-28 00:51:42.760306 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-28 00:51:42.760316 | orchestrator | Saturday 28 February 2026 00:49:17 +0000 (0:00:11.720) 0:02:53.420 ***** 2026-02-28 00:51:42.760326 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:42.760336 | orchestrator | 2026-02-28 00:51:42.760346 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-28 00:51:42.760355 | orchestrator | Saturday 28 February 2026 00:49:18 +0000 (0:00:01.146) 0:02:54.567 ***** 2026-02-28 00:51:42.760366 | orchestrator | changed: [testbed-manager] 2026-02-28 
00:51:42.760376 | orchestrator | 2026-02-28 00:51:42.760386 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-28 00:51:42.760396 | orchestrator | Saturday 28 February 2026 00:49:19 +0000 (0:00:00.787) 0:02:55.355 ***** 2026-02-28 00:51:42.760406 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-28 00:51:42.760416 | orchestrator | 2026-02-28 00:51:42.760426 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-28 00:51:42.760436 | orchestrator | Saturday 28 February 2026 00:49:19 +0000 (0:00:00.633) 0:02:55.989 ***** 2026-02-28 00:51:42.760447 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:42.760456 | orchestrator | 2026-02-28 00:51:42.760466 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-28 00:51:42.760476 | orchestrator | Saturday 28 February 2026 00:49:20 +0000 (0:00:01.031) 0:02:57.020 ***** 2026-02-28 00:51:42.760487 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:42.760496 | orchestrator | 2026-02-28 00:51:42.760512 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-28 00:51:42.760522 | orchestrator | Saturday 28 February 2026 00:49:21 +0000 (0:00:00.947) 0:02:57.968 ***** 2026-02-28 00:51:42.760532 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-28 00:51:42.760542 | orchestrator | 2026-02-28 00:51:42.760552 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-28 00:51:42.760568 | orchestrator | Saturday 28 February 2026 00:49:23 +0000 (0:00:02.261) 0:03:00.230 ***** 2026-02-28 00:51:42.760578 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-28 00:51:42.760588 | orchestrator | 2026-02-28 00:51:42.760598 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-02-28 00:51:42.760608 | orchestrator | Saturday 28 February 2026 00:49:25 +0000 (0:00:01.044) 0:03:01.274 ***** 2026-02-28 00:51:42.760618 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:42.760628 | orchestrator | 2026-02-28 00:51:42.760638 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-28 00:51:42.760648 | orchestrator | Saturday 28 February 2026 00:49:25 +0000 (0:00:00.841) 0:03:02.115 ***** 2026-02-28 00:51:42.760658 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:42.760668 | orchestrator | 2026-02-28 00:51:42.760678 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-28 00:51:42.760688 | orchestrator | 2026-02-28 00:51:42.760698 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-28 00:51:42.760708 | orchestrator | Saturday 28 February 2026 00:49:26 +0000 (0:00:00.919) 0:03:03.035 ***** 2026-02-28 00:51:42.760718 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:42.760728 | orchestrator | 2026-02-28 00:51:42.760738 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-28 00:51:42.760748 | orchestrator | Saturday 28 February 2026 00:49:27 +0000 (0:00:00.281) 0:03:03.316 ***** 2026-02-28 00:51:42.760758 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:51:42.760768 | orchestrator | 2026-02-28 00:51:42.760778 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-28 00:51:42.760788 | orchestrator | Saturday 28 February 2026 00:49:27 +0000 (0:00:00.260) 0:03:03.576 ***** 2026-02-28 00:51:42.760798 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:42.760808 | orchestrator | 2026-02-28 00:51:42.760818 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-02-28 00:51:42.760828 | orchestrator | Saturday 28 February 2026 00:49:28 +0000 (0:00:01.070) 0:03:04.646 ***** 2026-02-28 00:51:42.760843 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:42.760854 | orchestrator | 2026-02-28 00:51:42.760864 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-28 00:51:42.760873 | orchestrator | Saturday 28 February 2026 00:49:30 +0000 (0:00:02.073) 0:03:06.719 ***** 2026-02-28 00:51:42.760883 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:42.760893 | orchestrator | 2026-02-28 00:51:42.760903 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-28 00:51:42.760913 | orchestrator | Saturday 28 February 2026 00:49:31 +0000 (0:00:01.103) 0:03:07.823 ***** 2026-02-28 00:51:42.760922 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:42.760932 | orchestrator | 2026-02-28 00:51:42.760942 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-28 00:51:42.760952 | orchestrator | Saturday 28 February 2026 00:49:32 +0000 (0:00:00.530) 0:03:08.353 ***** 2026-02-28 00:51:42.760962 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:42.760972 | orchestrator | 2026-02-28 00:51:42.760982 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-28 00:51:42.760991 | orchestrator | Saturday 28 February 2026 00:49:43 +0000 (0:00:11.626) 0:03:19.979 ***** 2026-02-28 00:51:42.761001 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:42.761011 | orchestrator | 2026-02-28 00:51:42.761021 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-28 00:51:42.761031 | orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:18.445) 0:03:38.425 ***** 2026-02-28 00:51:42.761040 | orchestrator | ok: [testbed-manager] 2026-02-28 
00:51:42.761050 | orchestrator | 2026-02-28 00:51:42.761060 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-28 00:51:42.761070 | orchestrator | 2026-02-28 00:51:42.761080 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-28 00:51:42.761097 | orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:00.647) 0:03:39.073 ***** 2026-02-28 00:51:42.761107 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:51:42.761117 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:51:42.761127 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:51:42.761137 | orchestrator | 2026-02-28 00:51:42.761147 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-28 00:51:42.761157 | orchestrator | Saturday 28 February 2026 00:50:03 +0000 (0:00:00.377) 0:03:39.451 ***** 2026-02-28 00:51:42.761167 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.761177 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:42.761187 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:42.761197 | orchestrator | 2026-02-28 00:51:42.761207 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-28 00:51:42.761279 | orchestrator | Saturday 28 February 2026 00:50:03 +0000 (0:00:00.359) 0:03:39.811 ***** 2026-02-28 00:51:42.761292 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:51:42.761302 | orchestrator | 2026-02-28 00:51:42.761312 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-28 00:51:42.761323 | orchestrator | Saturday 28 February 2026 00:50:04 +0000 (0:00:00.938) 0:03:40.749 ***** 2026-02-28 00:51:42.761333 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:51:42.761343 | 
orchestrator | 2026-02-28 00:51:42.761353 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-28 00:51:42.761363 | orchestrator | Saturday 28 February 2026 00:50:05 +0000 (0:00:01.097) 0:03:41.847 ***** 2026-02-28 00:51:42.761373 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 00:51:42.761383 | orchestrator | 2026-02-28 00:51:42.761394 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-28 00:51:42.761404 | orchestrator | Saturday 28 February 2026 00:50:07 +0000 (0:00:01.712) 0:03:43.559 ***** 2026-02-28 00:51:42.761414 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.761424 | orchestrator | 2026-02-28 00:51:42.761434 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-28 00:51:42.761444 | orchestrator | Saturday 28 February 2026 00:50:07 +0000 (0:00:00.314) 0:03:43.874 ***** 2026-02-28 00:51:42.761454 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 00:51:42.761464 | orchestrator | 2026-02-28 00:51:42.761473 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-28 00:51:42.762300 | orchestrator | Saturday 28 February 2026 00:50:09 +0000 (0:00:01.486) 0:03:45.360 ***** 2026-02-28 00:51:42.762334 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.762342 | orchestrator | 2026-02-28 00:51:42.762351 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-28 00:51:42.762360 | orchestrator | Saturday 28 February 2026 00:50:09 +0000 (0:00:00.135) 0:03:45.496 ***** 2026-02-28 00:51:42.762368 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.762376 | orchestrator | 2026-02-28 00:51:42.762384 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-28 00:51:42.762392 | orchestrator | Saturday 28 
February 2026 00:50:09 +0000 (0:00:00.140) 0:03:45.636 ***** 2026-02-28 00:51:42.762400 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.762408 | orchestrator | 2026-02-28 00:51:42.762416 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-28 00:51:42.762424 | orchestrator | Saturday 28 February 2026 00:50:09 +0000 (0:00:00.166) 0:03:45.803 ***** 2026-02-28 00:51:42.762432 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.762440 | orchestrator | 2026-02-28 00:51:42.762448 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-28 00:51:42.762456 | orchestrator | Saturday 28 February 2026 00:50:09 +0000 (0:00:00.166) 0:03:45.969 ***** 2026-02-28 00:51:42.762464 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:51:42.762472 | orchestrator | 2026-02-28 00:51:42.762489 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-28 00:51:42.762503 | orchestrator | Saturday 28 February 2026 00:50:17 +0000 (0:00:07.309) 0:03:53.279 ***** 2026-02-28 00:51:42.762511 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-28 00:51:42.762530 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-28 00:51:42.762540 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-28 00:51:42.762548 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-28 00:51:42.762556 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-28 00:51:42.762564 | orchestrator | 2026-02-28 00:51:42.762572 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-28 00:51:42.762580 | orchestrator | Saturday 28 February 2026 00:51:01 +0000 (0:00:44.279) 0:04:37.559 ***** 2026-02-28 00:51:42.762588 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 00:51:42.762596 | orchestrator | 2026-02-28 00:51:42.762604 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-28 00:51:42.762612 | orchestrator | Saturday 28 February 2026 00:51:03 +0000 (0:00:01.884) 0:04:39.443 ***** 2026-02-28 00:51:42.762620 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:51:42.762628 | orchestrator | 2026-02-28 00:51:42.762636 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-28 00:51:42.762644 | orchestrator | Saturday 28 February 2026 00:51:06 +0000 (0:00:03.090) 0:04:42.534 ***** 2026-02-28 00:51:42.762652 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:51:42.762660 | orchestrator | 2026-02-28 00:51:42.762668 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-28 00:51:42.762676 | orchestrator | Saturday 28 February 2026 00:51:07 +0000 (0:00:01.426) 0:04:43.961 ***** 2026-02-28 00:51:42.762684 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.762692 | orchestrator | 2026-02-28 00:51:42.762700 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-28 00:51:42.762708 | orchestrator 
| Saturday 28 February 2026 00:51:07 +0000 (0:00:00.210) 0:04:44.171 ***** 2026-02-28 00:51:42.762716 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-28 00:51:42.762724 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-28 00:51:42.762732 | orchestrator | 2026-02-28 00:51:42.762740 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-28 00:51:42.762749 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:02.269) 0:04:46.441 ***** 2026-02-28 00:51:42.762757 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.762765 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:42.762773 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:42.762783 | orchestrator | 2026-02-28 00:51:42.762792 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-28 00:51:42.762801 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:00.521) 0:04:46.962 ***** 2026-02-28 00:51:42.762810 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:51:42.762819 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:51:42.762828 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:51:42.762837 | orchestrator | 2026-02-28 00:51:42.762846 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-28 00:51:42.762855 | orchestrator | 2026-02-28 00:51:42.762864 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-28 00:51:42.762873 | orchestrator | Saturday 28 February 2026 00:51:12 +0000 (0:00:01.596) 0:04:48.558 ***** 2026-02-28 00:51:42.762882 | orchestrator | ok: [testbed-manager] 2026-02-28 00:51:42.762891 | orchestrator | 2026-02-28 00:51:42.762901 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-28 00:51:42.762910 | orchestrator | Saturday 28 February 2026 00:51:12 +0000 (0:00:00.165) 0:04:48.724 ***** 2026-02-28 00:51:42.762925 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-28 00:51:42.762933 | orchestrator | 2026-02-28 00:51:42.762943 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-28 00:51:42.762951 | orchestrator | Saturday 28 February 2026 00:51:12 +0000 (0:00:00.239) 0:04:48.964 ***** 2026-02-28 00:51:42.762961 | orchestrator | changed: [testbed-manager] 2026-02-28 00:51:42.762969 | orchestrator | 2026-02-28 00:51:42.762979 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-28 00:51:42.762987 | orchestrator | 2026-02-28 00:51:42.762997 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-28 00:51:42.763006 | orchestrator | Saturday 28 February 2026 00:51:18 +0000 (0:00:06.278) 0:04:55.243 ***** 2026-02-28 00:51:42.763015 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:51:42.763024 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:51:42.763033 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:51:42.763042 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:51:42.763051 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:51:42.763060 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:51:42.763070 | orchestrator | 2026-02-28 00:51:42.763084 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-28 00:51:42.763097 | orchestrator | Saturday 28 February 2026 00:51:20 +0000 (0:00:01.621) 0:04:56.864 ***** 2026-02-28 00:51:42.763111 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-28 00:51:42.763124 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-28 00:51:42.763137 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-28 00:51:42.763150 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-28 00:51:42.763163 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-28 00:51:42.763175 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-28 00:51:42.763197 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-28 00:51:42.763211 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-28 00:51:42.763255 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-28 00:51:42.763269 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-28 00:51:42.763283 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-28 00:51:42.763291 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-28 00:51:42.763299 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-28 00:51:42.763307 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-28 00:51:42.763315 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-28 00:51:42.763323 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-28 00:51:42.763331 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-28 00:51:42.763339 | orchestrator | ok: [testbed-node-1 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-28 00:51:42.763347 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-28 00:51:42.763355 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-28 00:51:42.763363 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-28 00:51:42.763371 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-28 00:51:42.763385 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-28 00:51:42.763393 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-28 00:51:42.763401 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-28 00:51:42.763409 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-28 00:51:42.763417 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-28 00:51:42.763425 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-28 00:51:42.763433 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-28 00:51:42.763441 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-28 00:51:42.763449 | orchestrator | 2026-02-28 00:51:42.763457 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-28 00:51:42.763465 | orchestrator | Saturday 28 February 2026 00:51:40 +0000 (0:00:19.782) 0:05:16.647 ***** 2026-02-28 00:51:42.763473 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:42.763481 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:42.763489 | orchestrator | 
skipping: [testbed-node-5] 2026-02-28 00:51:42.763497 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.763505 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:42.763513 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:42.763521 | orchestrator | 2026-02-28 00:51:42.763529 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-28 00:51:42.763537 | orchestrator | Saturday 28 February 2026 00:51:41 +0000 (0:00:01.012) 0:05:17.659 ***** 2026-02-28 00:51:42.763545 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:51:42.763553 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:51:42.763561 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:51:42.763569 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:51:42.763577 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:51:42.763585 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:51:42.763592 | orchestrator | 2026-02-28 00:51:42.763600 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:51:42.763609 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:51:42.763619 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-28 00:51:42.763628 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-28 00:51:42.763636 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-28 00:51:42.763644 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-28 00:51:42.763652 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-28 00:51:42.763665 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-28 00:51:42.763673 | orchestrator | 2026-02-28 00:51:42.763681 | orchestrator | 2026-02-28 00:51:42.763689 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:51:42.763701 | orchestrator | Saturday 28 February 2026 00:51:41 +0000 (0:00:00.526) 0:05:18.186 ***** 2026-02-28 00:51:42.763715 | orchestrator | =============================================================================== 2026-02-28 00:51:42.763723 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.71s 2026-02-28 00:51:42.763731 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 44.28s 2026-02-28 00:51:42.763739 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.87s 2026-02-28 00:51:42.763747 | orchestrator | Manage labels ---------------------------------------------------------- 19.78s 2026-02-28 00:51:42.763755 | orchestrator | kubectl : Install required packages ------------------------------------ 18.45s 2026-02-28 00:51:42.763763 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.72s 2026-02-28 00:51:42.763771 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 11.63s 2026-02-28 00:51:42.763779 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 7.31s 2026-02-28 00:51:42.763787 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.28s 2026-02-28 00:51:42.763795 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.16s 2026-02-28 00:51:42.763803 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 4.14s 2026-02-28 00:51:42.763810 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact 
------------------------------- 3.72s 2026-02-28 00:51:42.763818 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 3.09s 2026-02-28 00:51:42.763826 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.08s 2026-02-28 00:51:42.763834 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 3.01s 2026-02-28 00:51:42.763842 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.78s 2026-02-28 00:51:42.763850 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.70s 2026-02-28 00:51:42.763858 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.60s 2026-02-28 00:51:42.763866 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.54s 2026-02-28 00:51:42.763874 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.37s 2026-02-28 00:51:42.763882 | orchestrator | 2026-02-28 00:51:42 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:51:42.763891 | orchestrator | 2026-02-28 00:51:42 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:51:42.763899 | orchestrator | 2026-02-28 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:51:45.820610 | orchestrator | 2026-02-28 00:51:45 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:51:45.823179 | orchestrator | 2026-02-28 00:51:45 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:51:45.829113 | orchestrator | 2026-02-28 00:51:45 | INFO  | Task 70b6af4b-0089-4da4-b802-cfcf7083a9af is in state STARTED 2026-02-28 00:51:45.831245 | orchestrator | 2026-02-28 00:51:45 | INFO  | Task 
5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:51:45.833145 | orchestrator | 2026-02-28 00:51:45 | INFO  | Task 5a83dde1-1351-465d-9c9b-be3a968d18a1 is in state STARTED 2026-02-28 00:51:45.835830 | orchestrator | 2026-02-28 00:51:45 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:51:45.835903 | orchestrator | 2026-02-28 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:51:48.893483 | orchestrator | 2026-02-28 00:51:48 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:51:48.893608 | orchestrator | 2026-02-28 00:51:48 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:51:48.895194 | orchestrator | 2026-02-28 00:51:48 | INFO  | Task 70b6af4b-0089-4da4-b802-cfcf7083a9af is in state STARTED 2026-02-28 00:51:48.895323 | orchestrator | 2026-02-28 00:51:48 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:51:48.896740 | orchestrator | 2026-02-28 00:51:48 | INFO  | Task 5a83dde1-1351-465d-9c9b-be3a968d18a1 is in state STARTED 2026-02-28 00:51:48.897710 | orchestrator | 2026-02-28 00:51:48 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:51:48.897739 | orchestrator | 2026-02-28 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:51:52.022227 | orchestrator | 2026-02-28 00:51:51 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:51:52.022412 | orchestrator | 2026-02-28 00:51:51 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:51:52.022429 | orchestrator | 2026-02-28 00:51:51 | INFO  | Task 70b6af4b-0089-4da4-b802-cfcf7083a9af is in state SUCCESS 2026-02-28 00:51:52.022442 | orchestrator | 2026-02-28 00:51:51 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:51:52.022453 | orchestrator | 2026-02-28 00:51:51 | INFO  | Task 
5a83dde1-1351-465d-9c9b-be3a968d18a1 is in state STARTED 2026-02-28 00:51:52.022464 | orchestrator | 2026-02-28 00:51:51 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:51:52.022476 | orchestrator | 2026-02-28 00:51:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:51:55.033805 | orchestrator | 2026-02-28 00:51:55 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:51:55.034124 | orchestrator | 2026-02-28 00:51:55 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:51:55.037164 | orchestrator | 2026-02-28 00:51:55 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:51:55.040108 | orchestrator | 2026-02-28 00:51:55 | INFO  | Task 5a83dde1-1351-465d-9c9b-be3a968d18a1 is in state STARTED 2026-02-28 00:51:55.040148 | orchestrator | 2026-02-28 00:51:55 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:51:55.040160 | orchestrator | 2026-02-28 00:51:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:51:58.074676 | orchestrator | 2026-02-28 00:51:58 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:51:58.075284 | orchestrator | 2026-02-28 00:51:58 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:51:58.076999 | orchestrator | 2026-02-28 00:51:58 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:51:58.079880 | orchestrator | 2026-02-28 00:51:58 | INFO  | Task 5a83dde1-1351-465d-9c9b-be3a968d18a1 is in state SUCCESS 2026-02-28 00:51:58.080862 | orchestrator | 2026-02-28 00:51:58 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:51:58.080896 | orchestrator | 2026-02-28 00:51:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:01.119353 | orchestrator | 2026-02-28 00:52:01 | INFO  | Task 
e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:01.120044 | orchestrator | 2026-02-28 00:52:01 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:01.121192 | orchestrator | 2026-02-28 00:52:01 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:52:01.122381 | orchestrator | 2026-02-28 00:52:01 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:01.122469 | orchestrator | 2026-02-28 00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:04.163389 | orchestrator | 2026-02-28 00:52:04 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:04.163794 | orchestrator | 2026-02-28 00:52:04 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:04.167426 | orchestrator | 2026-02-28 00:52:04 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:52:04.168501 | orchestrator | 2026-02-28 00:52:04 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:04.168550 | orchestrator | 2026-02-28 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:07.197743 | orchestrator | 2026-02-28 00:52:07 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:07.199074 | orchestrator | 2026-02-28 00:52:07 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:07.201222 | orchestrator | 2026-02-28 00:52:07 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state STARTED 2026-02-28 00:52:07.203179 | orchestrator | 2026-02-28 00:52:07 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:07.204433 | orchestrator | 2026-02-28 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:10.251305 | orchestrator | 2026-02-28 00:52:10 | INFO  | Task 
e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:10.254896 | orchestrator | 2026-02-28 00:52:10 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:10.257342 | orchestrator | 2026-02-28 00:52:10 | INFO  | Task 5cbfe9b6-b01a-4215-9823-8111627b3c54 is in state SUCCESS 2026-02-28 00:52:10.259699 | orchestrator | 2026-02-28 00:52:10.259756 | orchestrator | 2026-02-28 00:52:10.259768 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-28 00:52:10.259777 | orchestrator | 2026-02-28 00:52:10.259785 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-28 00:52:10.259794 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:00.290) 0:00:00.290 ***** 2026-02-28 00:52:10.259803 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-28 00:52:10.259811 | orchestrator | 2026-02-28 00:52:10.259819 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-28 00:52:10.259827 | orchestrator | Saturday 28 February 2026 00:51:49 +0000 (0:00:00.937) 0:00:01.228 ***** 2026-02-28 00:52:10.259835 | orchestrator | changed: [testbed-manager] 2026-02-28 00:52:10.259844 | orchestrator | 2026-02-28 00:52:10.259855 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-02-28 00:52:10.259868 | orchestrator | Saturday 28 February 2026 00:51:50 +0000 (0:00:01.388) 0:00:02.616 ***** 2026-02-28 00:52:10.259899 | orchestrator | changed: [testbed-manager] 2026-02-28 00:52:10.259916 | orchestrator | 2026-02-28 00:52:10.259930 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:52:10.259943 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:52:10.259958 | orchestrator | 2026-02-28 
00:52:10.259970 | orchestrator | 2026-02-28 00:52:10.259983 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:52:10.259997 | orchestrator | Saturday 28 February 2026 00:51:51 +0000 (0:00:00.549) 0:00:03.165 ***** 2026-02-28 00:52:10.260010 | orchestrator | =============================================================================== 2026-02-28 00:52:10.260023 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.39s 2026-02-28 00:52:10.260036 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.94s 2026-02-28 00:52:10.260078 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.55s 2026-02-28 00:52:10.260092 | orchestrator | 2026-02-28 00:52:10.260105 | orchestrator | 2026-02-28 00:52:10.260119 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-28 00:52:10.260134 | orchestrator | 2026-02-28 00:52:10.260144 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-28 00:52:10.260153 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:00.227) 0:00:00.227 ***** 2026-02-28 00:52:10.260161 | orchestrator | ok: [testbed-manager] 2026-02-28 00:52:10.260169 | orchestrator | 2026-02-28 00:52:10.260177 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-28 00:52:10.260185 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:00.686) 0:00:00.913 ***** 2026-02-28 00:52:10.260193 | orchestrator | ok: [testbed-manager] 2026-02-28 00:52:10.260201 | orchestrator | 2026-02-28 00:52:10.260210 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-28 00:52:10.260217 | orchestrator | Saturday 28 February 2026 00:51:49 +0000 (0:00:00.702) 0:00:01.616 ***** 2026-02-28 
00:52:10.260225 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-28 00:52:10.260233 | orchestrator | 2026-02-28 00:52:10.260241 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-28 00:52:10.260249 | orchestrator | Saturday 28 February 2026 00:51:50 +0000 (0:00:00.894) 0:00:02.510 ***** 2026-02-28 00:52:10.260257 | orchestrator | changed: [testbed-manager] 2026-02-28 00:52:10.260284 | orchestrator | 2026-02-28 00:52:10.260295 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-28 00:52:10.260304 | orchestrator | Saturday 28 February 2026 00:51:52 +0000 (0:00:01.906) 0:00:04.417 ***** 2026-02-28 00:52:10.260313 | orchestrator | changed: [testbed-manager] 2026-02-28 00:52:10.260322 | orchestrator | 2026-02-28 00:52:10.260331 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-28 00:52:10.260340 | orchestrator | Saturday 28 February 2026 00:51:53 +0000 (0:00:00.570) 0:00:04.988 ***** 2026-02-28 00:52:10.260349 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-28 00:52:10.260358 | orchestrator | 2026-02-28 00:52:10.260367 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-28 00:52:10.260376 | orchestrator | Saturday 28 February 2026 00:51:55 +0000 (0:00:01.986) 0:00:06.974 ***** 2026-02-28 00:52:10.260385 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-28 00:52:10.260394 | orchestrator | 2026-02-28 00:52:10.260403 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-28 00:52:10.260412 | orchestrator | Saturday 28 February 2026 00:51:56 +0000 (0:00:01.108) 0:00:08.082 ***** 2026-02-28 00:52:10.260420 | orchestrator | ok: [testbed-manager] 2026-02-28 00:52:10.260429 | orchestrator | 2026-02-28 00:52:10.260438 | orchestrator | TASK [Enable 
kubectl command line completion] ********************************** 2026-02-28 00:52:10.260446 | orchestrator | Saturday 28 February 2026 00:51:56 +0000 (0:00:00.463) 0:00:08.546 ***** 2026-02-28 00:52:10.260456 | orchestrator | ok: [testbed-manager] 2026-02-28 00:52:10.260465 | orchestrator | 2026-02-28 00:52:10.260473 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:52:10.260483 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:52:10.260491 | orchestrator | 2026-02-28 00:52:10.260500 | orchestrator | 2026-02-28 00:52:10.260509 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:52:10.260518 | orchestrator | Saturday 28 February 2026 00:51:56 +0000 (0:00:00.335) 0:00:08.881 ***** 2026-02-28 00:52:10.260527 | orchestrator | =============================================================================== 2026-02-28 00:52:10.260536 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.99s 2026-02-28 00:52:10.260545 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.91s 2026-02-28 00:52:10.260575 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.11s 2026-02-28 00:52:10.260599 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.89s 2026-02-28 00:52:10.260608 | orchestrator | Create .kube directory -------------------------------------------------- 0.70s 2026-02-28 00:52:10.260617 | orchestrator | Get home directory of operator user ------------------------------------- 0.69s 2026-02-28 00:52:10.260627 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.57s 2026-02-28 00:52:10.260635 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.46s 
2026-02-28 00:52:10.260644 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.34s 2026-02-28 00:52:10.260653 | orchestrator | 2026-02-28 00:52:10.260661 | orchestrator | 2026-02-28 00:52:10.260669 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-02-28 00:52:10.260676 | orchestrator | 2026-02-28 00:52:10.260684 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-28 00:52:10.260692 | orchestrator | Saturday 28 February 2026 00:49:28 +0000 (0:00:00.393) 0:00:00.393 ***** 2026-02-28 00:52:10.260700 | orchestrator | ok: [localhost] => { 2026-02-28 00:52:10.260709 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-02-28 00:52:10.260717 | orchestrator | } 2026-02-28 00:52:10.260725 | orchestrator | 2026-02-28 00:52:10.260733 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-02-28 00:52:10.260741 | orchestrator | Saturday 28 February 2026 00:49:28 +0000 (0:00:00.215) 0:00:00.609 ***** 2026-02-28 00:52:10.260750 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-02-28 00:52:10.260760 | orchestrator | ...ignoring 2026-02-28 00:52:10.260768 | orchestrator | 2026-02-28 00:52:10.260776 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-02-28 00:52:10.260784 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:04.644) 0:00:05.253 ***** 2026-02-28 00:52:10.260793 | orchestrator | skipping: [localhost] 2026-02-28 00:52:10.260807 | orchestrator | 2026-02-28 00:52:10.260820 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-02-28 00:52:10.260832 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:00.098) 0:00:05.352 ***** 2026-02-28 00:52:10.260845 | orchestrator | ok: [localhost] 2026-02-28 00:52:10.260869 | orchestrator | 2026-02-28 00:52:10.260883 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:52:10.260896 | orchestrator | 2026-02-28 00:52:10.260909 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:52:10.260921 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:00.225) 0:00:05.577 ***** 2026-02-28 00:52:10.260929 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:10.260937 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:10.260945 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:10.260953 | orchestrator | 2026-02-28 00:52:10.260966 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:52:10.260978 | orchestrator | Saturday 28 February 2026 00:49:34 +0000 (0:00:00.467) 0:00:06.045 ***** 2026-02-28 00:52:10.260992 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-28 00:52:10.261006 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-02-28 00:52:10.261020 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-28 00:52:10.261028 | orchestrator | 2026-02-28 00:52:10.261036 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-28 00:52:10.261044 | orchestrator | 2026-02-28 00:52:10.261052 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-28 00:52:10.261060 | orchestrator | Saturday 28 February 2026 00:49:35 +0000 (0:00:00.917) 0:00:06.962 ***** 2026-02-28 00:52:10.261068 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:52:10.261090 | orchestrator | 2026-02-28 00:52:10.261103 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-28 00:52:10.261115 | orchestrator | Saturday 28 February 2026 00:49:36 +0000 (0:00:01.550) 0:00:08.512 ***** 2026-02-28 00:52:10.261128 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:10.261140 | orchestrator | 2026-02-28 00:52:10.261153 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-28 00:52:10.261167 | orchestrator | Saturday 28 February 2026 00:49:37 +0000 (0:00:01.186) 0:00:09.699 ***** 2026-02-28 00:52:10.261181 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:10.261195 | orchestrator | 2026-02-28 00:52:10.261208 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-28 00:52:10.261222 | orchestrator | Saturday 28 February 2026 00:49:38 +0000 (0:00:00.540) 0:00:10.239 ***** 2026-02-28 00:52:10.261232 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:10.261239 | orchestrator | 2026-02-28 00:52:10.261247 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-28 00:52:10.261255 | 
orchestrator | Saturday 28 February 2026 00:49:38 +0000 (0:00:00.463) 0:00:10.702 ***** 2026-02-28 00:52:10.261337 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:10.261356 | orchestrator | 2026-02-28 00:52:10.261370 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-28 00:52:10.261384 | orchestrator | Saturday 28 February 2026 00:49:39 +0000 (0:00:00.686) 0:00:11.389 ***** 2026-02-28 00:52:10.261398 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:10.261411 | orchestrator | 2026-02-28 00:52:10.261425 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-28 00:52:10.261435 | orchestrator | Saturday 28 February 2026 00:49:41 +0000 (0:00:01.615) 0:00:13.005 ***** 2026-02-28 00:52:10.261444 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:52:10.261457 | orchestrator | 2026-02-28 00:52:10.261480 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-28 00:52:10.261707 | orchestrator | Saturday 28 February 2026 00:49:42 +0000 (0:00:01.490) 0:00:14.495 ***** 2026-02-28 00:52:10.261729 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:10.261743 | orchestrator | 2026-02-28 00:52:10.261756 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-28 00:52:10.261770 | orchestrator | Saturday 28 February 2026 00:49:44 +0000 (0:00:02.021) 0:00:16.517 ***** 2026-02-28 00:52:10.261784 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:10.261798 | orchestrator | 2026-02-28 00:52:10.261809 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-28 00:52:10.261818 | orchestrator | Saturday 28 February 2026 00:49:46 +0000 (0:00:01.704) 0:00:18.221 ***** 2026-02-28 00:52:10.261826 | orchestrator | 
skipping: [testbed-node-0] 2026-02-28 00:52:10.261834 | orchestrator | 2026-02-28 00:52:10.261842 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-28 00:52:10.261850 | orchestrator | Saturday 28 February 2026 00:49:46 +0000 (0:00:00.500) 0:00:18.721 ***** 2026-02-28 00:52:10.261863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:10.261903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:10.261914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:10.261923 | orchestrator | 2026-02-28 00:52:10.261932 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-28 00:52:10.261940 | orchestrator | Saturday 28 February 2026 00:49:48 +0000 (0:00:01.577) 0:00:20.299 ***** 2026-02-28 00:52:10.261966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:10.261976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:10.261995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:10.262009 | orchestrator | 2026-02-28 00:52:10.262082 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-28 00:52:10.262091 | orchestrator | Saturday 28 February 2026 00:49:51 +0000 (0:00:02.907) 0:00:23.208 ***** 2026-02-28 00:52:10.262100 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-28 00:52:10.262108 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-28 00:52:10.262116 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-28 00:52:10.262125 | orchestrator | 2026-02-28 00:52:10.262133 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-02-28 00:52:10.262141 | orchestrator | Saturday 28 February 2026 00:49:53 +0000 (0:00:02.378) 0:00:25.590 ***** 2026-02-28 00:52:10.262149 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-28 00:52:10.262157 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-28 00:52:10.262165 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-28 00:52:10.262173 | orchestrator | 2026-02-28 00:52:10.262181 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-28 00:52:10.262209 | orchestrator | Saturday 28 February 2026 00:49:57 +0000 (0:00:03.538) 0:00:29.129 ***** 2026-02-28 00:52:10.262217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-28 00:52:10.262226 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-28 00:52:10.262233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-28 00:52:10.262241 | orchestrator | 2026-02-28 00:52:10.262249 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-28 00:52:10.262257 | orchestrator | Saturday 28 February 2026 00:50:00 +0000 (0:00:03.096) 0:00:32.226 ***** 2026-02-28 00:52:10.262290 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-28 00:52:10.262307 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-28 00:52:10.262315 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-28 00:52:10.262323 | orchestrator | 2026-02-28 00:52:10.262331 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-02-28 00:52:10.262341 | orchestrator | Saturday 28 February 2026 00:50:03 +0000 (0:00:02.617) 0:00:34.843 ***** 2026-02-28 00:52:10.262350 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-28 00:52:10.262360 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-28 00:52:10.262369 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-28 00:52:10.262379 | orchestrator | 2026-02-28 00:52:10.262389 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-28 00:52:10.262398 | orchestrator | Saturday 28 February 2026 00:50:05 +0000 (0:00:02.429) 0:00:37.273 ***** 2026-02-28 00:52:10.262407 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-28 00:52:10.262416 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-28 00:52:10.262426 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-28 00:52:10.262435 | orchestrator | 2026-02-28 00:52:10.262444 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-28 00:52:10.262453 | orchestrator | Saturday 28 February 2026 00:50:08 +0000 (0:00:02.622) 0:00:39.895 ***** 2026-02-28 00:52:10.262462 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:10.262472 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:10.262481 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:10.262490 | orchestrator | 2026-02-28 00:52:10.262499 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-28 00:52:10.262509 | orchestrator | Saturday 28 February 2026 
00:50:08 +0000 (0:00:00.689) 0:00:40.585 ***** 2026-02-28 00:52:10.262519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:10.262540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:10.262558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:52:10.262568 | orchestrator | 2026-02-28 00:52:10.262578 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-28 00:52:10.262586 | orchestrator | Saturday 28 February 2026 00:50:10 +0000 (0:00:02.152) 0:00:42.737 ***** 2026-02-28 00:52:10.262594 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:10.262602 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:10.262610 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:10.262618 | orchestrator | 2026-02-28 00:52:10.262626 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-28 
00:52:10.262635 | orchestrator | Saturday 28 February 2026 00:50:12 +0000 (0:00:01.678) 0:00:44.416 ***** 2026-02-28 00:52:10.262643 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:10.262651 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:10.262659 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:10.262667 | orchestrator | 2026-02-28 00:52:10.262675 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-28 00:52:10.262683 | orchestrator | Saturday 28 February 2026 00:50:21 +0000 (0:00:08.888) 0:00:53.305 ***** 2026-02-28 00:52:10.262691 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:10.262699 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:10.262707 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:10.262715 | orchestrator | 2026-02-28 00:52:10.262723 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-28 00:52:10.262731 | orchestrator | 2026-02-28 00:52:10.262739 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-28 00:52:10.262747 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:00.728) 0:00:54.033 ***** 2026-02-28 00:52:10.262755 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:10.262763 | orchestrator | 2026-02-28 00:52:10.262771 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-28 00:52:10.262780 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:00.734) 0:00:54.768 ***** 2026-02-28 00:52:10.262788 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:52:10.262796 | orchestrator | 2026-02-28 00:52:10.262803 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-28 00:52:10.262812 | orchestrator | Saturday 28 February 2026 00:50:23 +0000 (0:00:00.435) 0:00:55.203 ***** 2026-02-28 
00:52:10.262820 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:10.262828 | orchestrator | 2026-02-28 00:52:10.262836 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-28 00:52:10.262844 | orchestrator | Saturday 28 February 2026 00:50:31 +0000 (0:00:08.229) 0:01:03.433 ***** 2026-02-28 00:52:10.262858 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:52:10.262866 | orchestrator | 2026-02-28 00:52:10.262874 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-28 00:52:10.262882 | orchestrator | 2026-02-28 00:52:10.262890 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-28 00:52:10.262898 | orchestrator | Saturday 28 February 2026 00:51:23 +0000 (0:00:51.751) 0:01:55.184 ***** 2026-02-28 00:52:10.262906 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:10.262914 | orchestrator | 2026-02-28 00:52:10.262922 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-28 00:52:10.262930 | orchestrator | Saturday 28 February 2026 00:51:24 +0000 (0:00:01.027) 0:01:56.212 ***** 2026-02-28 00:52:10.262938 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:52:10.262946 | orchestrator | 2026-02-28 00:52:10.262954 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-28 00:52:10.262962 | orchestrator | Saturday 28 February 2026 00:51:24 +0000 (0:00:00.339) 0:01:56.552 ***** 2026-02-28 00:52:10.262970 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:52:10.262978 | orchestrator | 2026-02-28 00:52:10.262986 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-28 00:52:10.262994 | orchestrator | Saturday 28 February 2026 00:51:27 +0000 (0:00:02.493) 0:01:59.045 ***** 2026-02-28 00:52:10.263002 | orchestrator | changed: 
[testbed-node-1] 2026-02-28 00:52:10.263010 | orchestrator | 2026-02-28 00:52:10.263018 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-28 00:52:10.263026 | orchestrator | 2026-02-28 00:52:10.263034 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-28 00:52:10.263042 | orchestrator | Saturday 28 February 2026 00:51:45 +0000 (0:00:18.324) 0:02:17.370 ***** 2026-02-28 00:52:10.263054 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:10.263062 | orchestrator | 2026-02-28 00:52:10.263075 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-28 00:52:10.263083 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:01.157) 0:02:18.527 ***** 2026-02-28 00:52:10.263091 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:52:10.263099 | orchestrator | 2026-02-28 00:52:10.263107 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-28 00:52:10.263115 | orchestrator | Saturday 28 February 2026 00:51:47 +0000 (0:00:00.774) 0:02:19.302 ***** 2026-02-28 00:52:10.263123 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:10.263131 | orchestrator | 2026-02-28 00:52:10.263139 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-28 00:52:10.263148 | orchestrator | Saturday 28 February 2026 00:51:49 +0000 (0:00:02.368) 0:02:21.670 ***** 2026-02-28 00:52:10.263156 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:52:10.263164 | orchestrator | 2026-02-28 00:52:10.263172 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-28 00:52:10.263180 | orchestrator | 2026-02-28 00:52:10.263188 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-28 00:52:10.263196 | orchestrator | Saturday 28 
February 2026 00:52:05 +0000 (0:00:15.806) 0:02:37.476 ***** 2026-02-28 00:52:10.263203 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:52:10.263211 | orchestrator | 2026-02-28 00:52:10.263219 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-28 00:52:10.263227 | orchestrator | Saturday 28 February 2026 00:52:06 +0000 (0:00:00.714) 0:02:38.191 ***** 2026-02-28 00:52:10.263235 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:52:10.263243 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:52:10.263251 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:52:10.263260 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-28 00:52:10.263282 | orchestrator | enable_outward_rabbitmq_True 2026-02-28 00:52:10.263290 | orchestrator | 2026-02-28 00:52:10.263298 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-28 00:52:10.263312 | orchestrator | skipping: no hosts matched 2026-02-28 00:52:10.263320 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-28 00:52:10.263328 | orchestrator | outward_rabbitmq_restart 2026-02-28 00:52:10.263337 | orchestrator | 2026-02-28 00:52:10.263345 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-28 00:52:10.263353 | orchestrator | skipping: no hosts matched 2026-02-28 00:52:10.263361 | orchestrator | 2026-02-28 00:52:10.263369 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-28 00:52:10.263377 | orchestrator | skipping: no hosts matched 2026-02-28 00:52:10.263385 | orchestrator | 2026-02-28 00:52:10.263393 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:52:10.263401 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1 
 rescued=0 ignored=1  2026-02-28 00:52:10.263410 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-28 00:52:10.263418 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:52:10.263426 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-28 00:52:10.263435 | orchestrator | 2026-02-28 00:52:10.263443 | orchestrator | 2026-02-28 00:52:10.263451 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:52:10.263459 | orchestrator | Saturday 28 February 2026 00:52:09 +0000 (0:00:02.733) 0:02:40.924 ***** 2026-02-28 00:52:10.263467 | orchestrator | =============================================================================== 2026-02-28 00:52:10.263475 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 85.88s 2026-02-28 00:52:10.263483 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 13.09s 2026-02-28 00:52:10.263491 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.89s 2026-02-28 00:52:10.263499 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.65s 2026-02-28 00:52:10.263507 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.54s 2026-02-28 00:52:10.263515 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.10s 2026-02-28 00:52:10.263523 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.92s 2026-02-28 00:52:10.263532 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.91s 2026-02-28 00:52:10.263540 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.73s 
2026-02-28 00:52:10.263548 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.62s 2026-02-28 00:52:10.263556 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.62s 2026-02-28 00:52:10.263564 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.43s 2026-02-28 00:52:10.263572 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.38s 2026-02-28 00:52:10.263580 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.15s 2026-02-28 00:52:10.263587 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.02s 2026-02-28 00:52:10.263595 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.70s 2026-02-28 00:52:10.263608 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.68s 2026-02-28 00:52:10.263622 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.62s 2026-02-28 00:52:10.263630 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.58s 2026-02-28 00:52:10.263639 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.55s 2026-02-28 00:52:10.263653 | orchestrator | 2026-02-28 00:52:10 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:10.263661 | orchestrator | 2026-02-28 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:13.307649 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:13.309777 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:13.310802 | orchestrator | 2026-02-28 00:52:13 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 
is in state STARTED 2026-02-28 00:52:13.310894 | orchestrator | 2026-02-28 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:16.351409 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:16.352513 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:16.353440 | orchestrator | 2026-02-28 00:52:16 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:16.353474 | orchestrator | 2026-02-28 00:52:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:19.405512 | orchestrator | 2026-02-28 00:52:19 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:19.406347 | orchestrator | 2026-02-28 00:52:19 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:19.411059 | orchestrator | 2026-02-28 00:52:19 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:19.417481 | orchestrator | 2026-02-28 00:52:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:22.468757 | orchestrator | 2026-02-28 00:52:22 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:22.470223 | orchestrator | 2026-02-28 00:52:22 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:22.470749 | orchestrator | 2026-02-28 00:52:22 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:22.470770 | orchestrator | 2026-02-28 00:52:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:25.512913 | orchestrator | 2026-02-28 00:52:25 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:25.513672 | orchestrator | 2026-02-28 00:52:25 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:25.515846 | 
orchestrator | 2026-02-28 00:52:25 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:25.515926 | orchestrator | 2026-02-28 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:28.562162 | orchestrator | 2026-02-28 00:52:28 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:28.564794 | orchestrator | 2026-02-28 00:52:28 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:28.568766 | orchestrator | 2026-02-28 00:52:28 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:28.568825 | orchestrator | 2026-02-28 00:52:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:31.600355 | orchestrator | 2026-02-28 00:52:31 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:31.604790 | orchestrator | 2026-02-28 00:52:31 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:31.604870 | orchestrator | 2026-02-28 00:52:31 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:31.604915 | orchestrator | 2026-02-28 00:52:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:34.645915 | orchestrator | 2026-02-28 00:52:34 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:34.646460 | orchestrator | 2026-02-28 00:52:34 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:34.647679 | orchestrator | 2026-02-28 00:52:34 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:34.647748 | orchestrator | 2026-02-28 00:52:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:37.677886 | orchestrator | 2026-02-28 00:52:37 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:37.678562 | orchestrator | 2026-02-28 00:52:37 | INFO  | Task 
be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:37.679715 | orchestrator | 2026-02-28 00:52:37 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:37.679741 | orchestrator | 2026-02-28 00:52:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:40.716912 | orchestrator | 2026-02-28 00:52:40 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:40.719484 | orchestrator | 2026-02-28 00:52:40 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:40.721508 | orchestrator | 2026-02-28 00:52:40 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:40.721566 | orchestrator | 2026-02-28 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:43.753470 | orchestrator | 2026-02-28 00:52:43 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:43.754691 | orchestrator | 2026-02-28 00:52:43 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:43.756256 | orchestrator | 2026-02-28 00:52:43 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:43.756285 | orchestrator | 2026-02-28 00:52:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:46.808184 | orchestrator | 2026-02-28 00:52:46 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:46.811906 | orchestrator | 2026-02-28 00:52:46 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:46.813463 | orchestrator | 2026-02-28 00:52:46 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:46.813494 | orchestrator | 2026-02-28 00:52:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:49.882104 | orchestrator | 2026-02-28 00:52:49 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state 
STARTED 2026-02-28 00:52:49.885107 | orchestrator | 2026-02-28 00:52:49 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:49.885618 | orchestrator | 2026-02-28 00:52:49 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:49.885654 | orchestrator | 2026-02-28 00:52:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:52.932196 | orchestrator | 2026-02-28 00:52:52 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:52.934882 | orchestrator | 2026-02-28 00:52:52 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:52.938699 | orchestrator | 2026-02-28 00:52:52 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:52.938965 | orchestrator | 2026-02-28 00:52:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:55.983723 | orchestrator | 2026-02-28 00:52:55 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:55.983975 | orchestrator | 2026-02-28 00:52:55 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:55.985236 | orchestrator | 2026-02-28 00:52:55 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:55.985260 | orchestrator | 2026-02-28 00:52:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:52:59.047488 | orchestrator | 2026-02-28 00:52:59 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:52:59.047585 | orchestrator | 2026-02-28 00:52:59 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:52:59.047599 | orchestrator | 2026-02-28 00:52:59 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:52:59.047609 | orchestrator | 2026-02-28 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:53:02.079308 | orchestrator | 
2026-02-28 00:53:02 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:53:02.080775 | orchestrator | 2026-02-28 00:53:02 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:53:02.083374 | orchestrator | 2026-02-28 00:53:02 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:53:02.083659 | orchestrator | 2026-02-28 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:53:05.119415 | orchestrator | 2026-02-28 00:53:05 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:53:05.120199 | orchestrator | 2026-02-28 00:53:05 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:53:05.121642 | orchestrator | 2026-02-28 00:53:05 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:53:05.121667 | orchestrator | 2026-02-28 00:53:05 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:53:08.163557 | orchestrator | 2026-02-28 00:53:08 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state STARTED 2026-02-28 00:53:08.165664 | orchestrator | 2026-02-28 00:53:08 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:53:08.167231 | orchestrator | 2026-02-28 00:53:08 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:53:08.167298 | orchestrator | 2026-02-28 00:53:08 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:53:11.207696 | orchestrator | 2026-02-28 00:53:11.207771 | orchestrator | 2026-02-28 00:53:11.207782 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:53:11.207789 | orchestrator | 2026-02-28 00:53:11.207796 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:53:11.207805 | orchestrator | Saturday 28 February 2026 00:50:31 +0000 (0:00:00.241) 
0:00:00.241 ***** 2026-02-28 00:53:11.207816 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:53:11.207831 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:53:11.207843 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:53:11.207854 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.207864 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.207874 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.207884 | orchestrator | 2026-02-28 00:53:11.207894 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:53:11.207905 | orchestrator | Saturday 28 February 2026 00:50:32 +0000 (0:00:00.825) 0:00:01.066 ***** 2026-02-28 00:53:11.207915 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-28 00:53:11.207946 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-28 00:53:11.207958 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-28 00:53:11.207968 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-28 00:53:11.207979 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-28 00:53:11.207990 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-28 00:53:11.207997 | orchestrator | 2026-02-28 00:53:11.208051 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-28 00:53:11.208062 | orchestrator | 2026-02-28 00:53:11.208070 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-28 00:53:11.208076 | orchestrator | Saturday 28 February 2026 00:50:33 +0000 (0:00:00.826) 0:00:01.893 ***** 2026-02-28 00:53:11.208083 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:53:11.208090 | orchestrator | 2026-02-28 00:53:11.208096 | orchestrator | TASK 
[ovn-controller : Ensuring config directories exist] **********************
2026-02-28 00:53:11.208102 | orchestrator | Saturday 28 February 2026 00:50:34 +0000 (0:00:01.507) 0:00:03.400 *****
2026-02-28 00:53:11.208110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208178 | orchestrator |
2026-02-28 00:53:11.208495 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-28 00:53:11.208529 | orchestrator | Saturday 28 February 2026 00:50:36 +0000 (0:00:01.947) 0:00:05.348 *****
2026-02-28 00:53:11.208542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208591 | orchestrator |
2026-02-28 00:53:11.208597 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-02-28 00:53:11.208604 | orchestrator | Saturday 28 February 2026 00:50:38 +0000 (0:00:02.238) 0:00:07.587 *****
2026-02-28 00:53:11.208615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208666 | orchestrator |
2026-02-28 00:53:11.208672 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-02-28 00:53:11.208690 | orchestrator | Saturday 28 February 2026 00:50:40 +0000 (0:00:01.535) 0:00:09.123 *****
2026-02-28 00:53:11.208701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208748 | orchestrator |
2026-02-28 00:53:11.208758 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-02-28 00:53:11.208765 | orchestrator | Saturday 28 February 2026 00:50:42 +0000 (0:00:02.131) 0:00:11.254 *****
2026-02-28 00:53:11.208771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.208810 | orchestrator |
2026-02-28 00:53:11.208816 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-02-28 00:53:11.208822 | orchestrator | Saturday 28 February 2026 00:50:44 +0000 (0:00:01.814) 0:00:13.069 *****
2026-02-28 00:53:11.208829 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:53:11.208836 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:53:11.208842 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:53:11.208852 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:53:11.208858 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:53:11.208864 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:53:11.208870 | orchestrator |
2026-02-28 00:53:11.208879 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-02-28 00:53:11.208886 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:02.777) 0:00:15.846 *****
2026-02-28 00:53:11.208892 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-02-28 00:53:11.208899 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-02-28 00:53:11.208905 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-02-28 00:53:11.208911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-02-28 00:53:11.208918 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-02-28 00:53:11.208924 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.208930 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-02-28 00:53:11.208936 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.208946 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.208952 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.208958 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.208965 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-28 00:53:11.208971 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.208978 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.208988 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.209002 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.209016 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.209027 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.209037 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-28 00:53:11.209048 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.209058 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.209070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.209082 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.209093 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.209100 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-28 00:53:11.209107 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.209120 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.209127 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.209135 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.209142 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.209149 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-28 00:53:11.209156 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.209163 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.209170 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-28 00:53:11.209177 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.209185 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.209192 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-28 00:53:11.209203 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-28 00:53:11.209211 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-28 00:53:11.209218 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-28 00:53:11.209225 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-28 00:53:11.209232 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-02-28 00:53:11.209240 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-28 00:53:11.209246 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-02-28 00:53:11.209257 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-02-28 00:53:11.209264 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-28 00:53:11.209270 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-02-28 00:53:11.209276 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-02-28 00:53:11.209283 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-02-28 00:53:11.209289 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-28 00:53:11.209295 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-28 00:53:11.209301 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-28 00:53:11.209308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-28 00:53:11.209314 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-28 00:53:11.209324 | orchestrator |
2026-02-28 00:53:11.209330 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.209336 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:23.094) 0:00:38.940 *****
2026-02-28 00:53:11.209358 | orchestrator |
2026-02-28 00:53:11.209366 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.209372 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:00.212) 0:00:39.153 *****
2026-02-28 00:53:11.209378 | orchestrator |
2026-02-28 00:53:11.209384 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.209391 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:00.165) 0:00:39.319 *****
2026-02-28 00:53:11.209397 | orchestrator |
2026-02-28 00:53:11.209403 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.209409 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:00.070) 0:00:39.390 *****
2026-02-28 00:53:11.209416 | orchestrator |
2026-02-28 00:53:11.209506 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.209522 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:00.084) 0:00:39.474 *****
2026-02-28 00:53:11.209532 | orchestrator |
2026-02-28 00:53:11.209543 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-28 00:53:11.209554 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:00.070) 0:00:39.544 *****
2026-02-28 00:53:11.209564 | orchestrator |
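The "Configure OVN in OVSDB" task above loops over per-chassis `external_ids` entries in the local Open vSwitch database: each node gets its own `ovn-encap-ip`, all nodes use `geneve` encapsulation, and `ovn-remote` lists the southbound DB endpoints on the three controller nodes. As a minimal illustration (not Kolla's actual code; `ovsdb_external_ids` is a hypothetical helper), the loop items seen in the log can be derived like this:

```python
# Hypothetical sketch of how the per-node external_ids items in the
# "Configure OVN in OVSDB" task output could be built. Values are taken
# from the log above: node-N gets encap IP 192.168.16.(10+N), and the
# SB DBs run on the three control nodes at port 6642.

def ovsdb_external_ids(node_index, controller_ips, network_prefix="192.168.16."):
    """Build the external_ids key/value items for one testbed node."""
    encap_ip = f"{network_prefix}{10 + node_index}"  # node-0 -> .10, node-3 -> .13
    remote = ",".join(f"tcp:{ip}:6642" for ip in controller_ips)
    return [
        {"name": "ovn-encap-ip", "value": encap_ip},
        {"name": "ovn-encap-type", "value": "geneve"},
        {"name": "ovn-remote", "value": remote},
        {"name": "ovn-remote-probe-interval", "value": "60000"},
        {"name": "ovn-openflow-probe-interval", "value": "60"},
    ]

controllers = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
items = ovsdb_external_ids(3, controllers)  # items for testbed-node-3
```

On a real chassis these keys end up as `external_ids` on the `Open_vSwitch` record, which is what ovn-controller reads to find its encapsulation address and the southbound database.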
2026-02-28 00:53:11.209575 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-02-28 00:53:11.209587 | orchestrator | Saturday 28 February 2026 00:51:11 +0000 (0:00:00.135) 0:00:39.680 *****
2026-02-28 00:53:11.209597 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:53:11.209607 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:53:11.209613 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.209620 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.209626 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.209632 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:53:11.209638 | orchestrator |
2026-02-28 00:53:11.209645 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-02-28 00:53:11.209651 | orchestrator | Saturday 28 February 2026 00:51:13 +0000 (0:00:02.224) 0:00:41.904 *****
2026-02-28 00:53:11.209657 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:53:11.209664 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:53:11.209670 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:53:11.209676 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:53:11.209682 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:53:11.209688 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:53:11.209695 | orchestrator |
2026-02-28 00:53:11.209701 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-02-28 00:53:11.209707 | orchestrator |
2026-02-28 00:53:11.209714 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-28 00:53:11.209724 | orchestrator | Saturday 28 February 2026 00:51:39 +0000 (0:00:25.804) 0:01:07.709 *****
2026-02-28 00:53:11.209731 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:53:11.209737 | orchestrator |
2026-02-28 00:53:11.209743 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-28 00:53:11.209749 | orchestrator | Saturday 28 February 2026 00:51:40 +0000 (0:00:01.273) 0:01:08.983 *****
2026-02-28 00:53:11.209756 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:53:11.209762 | orchestrator |
2026-02-28 00:53:11.209768 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-02-28 00:53:11.209775 | orchestrator | Saturday 28 February 2026 00:51:41 +0000 (0:00:00.791) 0:01:09.775 *****
2026-02-28 00:53:11.209781 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.209787 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.209793 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.209806 | orchestrator |
2026-02-28 00:53:11.209812 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-02-28 00:53:11.209818 | orchestrator | Saturday 28 February 2026 00:51:42 +0000 (0:00:01.198) 0:01:10.974 *****
2026-02-28 00:53:11.209825 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.209831 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.209837 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.209849 | orchestrator |
2026-02-28 00:53:11.209855 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-02-28 00:53:11.209862 | orchestrator | Saturday 28 February 2026 00:51:43 +0000 (0:00:00.725) 0:01:11.700 *****
2026-02-28 00:53:11.209868 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.209874 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.209880 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.209887 | orchestrator |
2026-02-28 00:53:11.209893 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-02-28 00:53:11.209899 | orchestrator | Saturday 28 February 2026 00:51:43 +0000 (0:00:00.749) 0:01:12.450 *****
2026-02-28 00:53:11.209906 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.209912 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.209918 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.209924 | orchestrator |
2026-02-28 00:53:11.209931 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-28 00:53:11.209937 | orchestrator | Saturday 28 February 2026 00:51:44 +0000 (0:00:00.911) 0:01:13.361 *****
2026-02-28 00:53:11.209943 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.209949 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.209956 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.209962 | orchestrator |
2026-02-28 00:53:11.209968 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-28 00:53:11.209975 | orchestrator | Saturday 28 February 2026 00:51:45 +0000 (0:00:01.210) 0:01:14.572 *****
2026-02-28 00:53:11.209981 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.209987 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.209993 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210000 | orchestrator |
2026-02-28 00:53:11.210006 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-28 00:53:11.210059 | orchestrator | Saturday 28 February 2026 00:51:47 +0000 (0:00:01.077) 0:01:15.650 *****
2026-02-28 00:53:11.210068 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210074 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210081 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210087 | orchestrator |
2026-02-28 00:53:11.210093 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-28 00:53:11.210099 | orchestrator | Saturday 28 February 2026 00:51:47 +0000 (0:00:00.679) 0:01:16.329 *****
2026-02-28 00:53:11.210106 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210112 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210118 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210124 | orchestrator |
2026-02-28 00:53:11.210131 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-28 00:53:11.210137 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:00.545) 0:01:16.874 *****
2026-02-28 00:53:11.210143 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210150 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210156 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210162 | orchestrator |
2026-02-28 00:53:11.210168 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-28 00:53:11.210175 | orchestrator | Saturday 28 February 2026 00:51:48 +0000 (0:00:00.692) 0:01:17.567 *****
2026-02-28 00:53:11.210181 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210187 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210194 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210200 | orchestrator |
2026-02-28 00:53:11.210206 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-28 00:53:11.210220 | orchestrator | Saturday 28 February 2026 00:51:49 +0000 (0:00:00.362) 0:01:17.929 *****
2026-02-28 00:53:11.210226 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210233 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210239 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210245 | orchestrator |
2026-02-28 00:53:11.210251 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-28 00:53:11.210258 | orchestrator | Saturday 28 February 2026 00:51:49 +0000 (0:00:00.377) 0:01:18.307 *****
2026-02-28 00:53:11.210264 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210270 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210276 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210283 | orchestrator |
2026-02-28 00:53:11.210289 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-28 00:53:11.210295 | orchestrator | Saturday 28 February 2026 00:51:50 +0000 (0:00:00.531) 0:01:18.838 *****
2026-02-28 00:53:11.210301 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210308 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210314 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210320 | orchestrator |
2026-02-28 00:53:11.210326 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-28 00:53:11.210336 | orchestrator | Saturday 28 February 2026 00:51:50 +0000 (0:00:00.616) 0:01:19.455 *****
2026-02-28 00:53:11.210379 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210387 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210393 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210399 | orchestrator |
2026-02-28 00:53:11.210405 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-28 00:53:11.210411 | orchestrator | Saturday 28 February 2026 00:51:51 +0000 (0:00:00.383) 0:01:19.839 *****
2026-02-28 00:53:11.210418 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210424 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210430 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210436 | orchestrator |
2026-02-28 00:53:11.210442 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-28 00:53:11.210449 | orchestrator | Saturday 28 February 2026 00:51:51 +0000 (0:00:00.323) 0:01:20.163 *****
2026-02-28 00:53:11.210455 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210461 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210467 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210473 | orchestrator |
2026-02-28 00:53:11.210480 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-28 00:53:11.210486 | orchestrator | Saturday 28 February 2026 00:51:51 +0000 (0:00:00.325) 0:01:20.489 *****
2026-02-28 00:53:11.210492 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210498 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210509 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210515 | orchestrator |
2026-02-28 00:53:11.210522 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-28 00:53:11.210528 | orchestrator | Saturday 28 February 2026 00:51:52 +0000 (0:00:00.327) 0:01:20.816 *****
2026-02-28 00:53:11.210534 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:53:11.210541 | orchestrator |
2026-02-28 00:53:11.210547 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-28 00:53:11.210553 | orchestrator | Saturday 28 February 2026 00:51:53 +0000 (0:00:00.907) 0:01:21.723 *****
2026-02-28 00:53:11.210559 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.210565 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.210572 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.210578 | orchestrator |
2026-02-28 00:53:11.210584 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-28 00:53:11.210590 | orchestrator | Saturday 28 February 2026 00:51:53 +0000 (0:00:00.770) 0:01:22.494 *****
2026-02-28 00:53:11.210601 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:53:11.210607 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:53:11.210613 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:53:11.210619 | orchestrator |
2026-02-28 00:53:11.210626 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-28 00:53:11.210632 | orchestrator | Saturday 28 February 2026 00:51:54 +0000 (0:00:00.717) 0:01:23.211 *****
2026-02-28 00:53:11.210638 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210644 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210650 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210656 | orchestrator |
2026-02-28 00:53:11.210663 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-28 00:53:11.210669 | orchestrator | Saturday 28 February 2026 00:51:55 +0000 (0:00:01.154) 0:01:24.366 *****
2026-02-28 00:53:11.210675 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210681 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210687 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210693 | orchestrator |
2026-02-28 00:53:11.210700 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-28 00:53:11.210706 | orchestrator | Saturday 28 February 2026 00:51:56 +0000 (0:00:00.729) 0:01:25.096 *****
2026-02-28 00:53:11.210712 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210718 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210724 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210731 | orchestrator |
2026-02-28 00:53:11.210737 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-28 00:53:11.210743 | orchestrator | Saturday 28 February 2026 00:51:57 +0000 (0:00:00.535) 0:01:25.631 *****
2026-02-28 00:53:11.210749 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210756 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210762 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210768 | orchestrator |
2026-02-28 00:53:11.210774 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-28 00:53:11.210780 | orchestrator | Saturday 28 February 2026 00:51:57 +0000 (0:00:00.442) 0:01:26.074 *****
2026-02-28 00:53:11.210787 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210793 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210799 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210805 | orchestrator |
2026-02-28 00:53:11.210811 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-28 00:53:11.210817 | orchestrator | Saturday 28 February 2026 00:51:58 +0000 (0:00:00.687) 0:01:26.761 *****
2026-02-28 00:53:11.210824 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:53:11.210830 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:53:11.210836 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:53:11.210842 | orchestrator |
2026-02-28 00:53:11.210848 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-28 00:53:11.210854 | orchestrator | Saturday 28 February 2026 00:51:58 +0000 (0:00:00.378) 0:01:27.139 *****
2026-02-28 00:53:11.210861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 00:53:11.210877 | orchestrator | changed: [testbed-node-1] => (item={'key':
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210938 | orchestrator | 2026-02-28 00:53:11.210945 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-28 00:53:11.210951 | orchestrator | Saturday 28 February 2026 00:52:00 +0000 (0:00:01.651) 0:01:28.791 ***** 2026-02-28 00:53:11.210957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.210994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211026 | orchestrator | 2026-02-28 00:53:11.211033 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-28 00:53:11.211039 | orchestrator | Saturday 28 February 2026 00:52:05 +0000 (0:00:04.927) 0:01:33.719 ***** 2026-02-28 00:53:11.211045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211088 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211114 | orchestrator | 2026-02-28 00:53:11.211120 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.211126 | orchestrator | Saturday 28 February 2026 00:52:07 +0000 (0:00:02.850) 0:01:36.570 ***** 2026-02-28 00:53:11.211133 
| orchestrator | 2026-02-28 00:53:11.211139 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.211145 | orchestrator | Saturday 28 February 2026 00:52:08 +0000 (0:00:00.089) 0:01:36.660 ***** 2026-02-28 00:53:11.211151 | orchestrator | 2026-02-28 00:53:11.211158 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.211164 | orchestrator | Saturday 28 February 2026 00:52:08 +0000 (0:00:00.074) 0:01:36.734 ***** 2026-02-28 00:53:11.211170 | orchestrator | 2026-02-28 00:53:11.211176 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-28 00:53:11.211182 | orchestrator | Saturday 28 February 2026 00:52:08 +0000 (0:00:00.076) 0:01:36.811 ***** 2026-02-28 00:53:11.211189 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.211195 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.211201 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.211207 | orchestrator | 2026-02-28 00:53:11.211216 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-28 00:53:11.211232 | orchestrator | Saturday 28 February 2026 00:52:15 +0000 (0:00:07.620) 0:01:44.431 ***** 2026-02-28 00:53:11.211248 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.211260 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.211270 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.211281 | orchestrator | 2026-02-28 00:53:11.211292 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-28 00:53:11.211303 | orchestrator | Saturday 28 February 2026 00:52:22 +0000 (0:00:06.594) 0:01:51.025 ***** 2026-02-28 00:53:11.211314 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.211323 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.211329 | 
orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.211335 | orchestrator | 2026-02-28 00:53:11.211352 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-28 00:53:11.211359 | orchestrator | Saturday 28 February 2026 00:52:29 +0000 (0:00:06.615) 0:01:57.641 ***** 2026-02-28 00:53:11.211366 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:11.211372 | orchestrator | 2026-02-28 00:53:11.211378 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-28 00:53:11.211384 | orchestrator | Saturday 28 February 2026 00:52:29 +0000 (0:00:00.364) 0:01:58.005 ***** 2026-02-28 00:53:11.211391 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.211397 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.211403 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.211409 | orchestrator | 2026-02-28 00:53:11.211416 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-28 00:53:11.211426 | orchestrator | Saturday 28 February 2026 00:52:30 +0000 (0:00:00.829) 0:01:58.835 ***** 2026-02-28 00:53:11.211432 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:53:11.211438 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:53:11.211445 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.211451 | orchestrator | 2026-02-28 00:53:11.211457 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-28 00:53:11.211464 | orchestrator | Saturday 28 February 2026 00:52:30 +0000 (0:00:00.662) 0:01:59.497 ***** 2026-02-28 00:53:11.211470 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.211476 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.211482 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.211489 | orchestrator | 2026-02-28 00:53:11.211495 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] 
*************************** 2026-02-28 00:53:11.211501 | orchestrator | Saturday 28 February 2026 00:52:31 +0000 (0:00:00.801) 0:02:00.299 ***** 2026-02-28 00:53:11.211507 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:53:11.211513 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:53:11.211520 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.211526 | orchestrator | 2026-02-28 00:53:11.211532 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-28 00:53:11.211538 | orchestrator | Saturday 28 February 2026 00:52:32 +0000 (0:00:00.843) 0:02:01.142 ***** 2026-02-28 00:53:11.211544 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.211551 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.211562 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.211568 | orchestrator | 2026-02-28 00:53:11.211574 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-28 00:53:11.211581 | orchestrator | Saturday 28 February 2026 00:52:33 +0000 (0:00:00.729) 0:02:01.871 ***** 2026-02-28 00:53:11.211587 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.211593 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.211599 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.211605 | orchestrator | 2026-02-28 00:53:11.211612 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-28 00:53:11.211618 | orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:00.822) 0:02:02.693 ***** 2026-02-28 00:53:11.211624 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.211630 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.211642 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.211648 | orchestrator | 2026-02-28 00:53:11.211655 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-28 00:53:11.211661 | 
orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:00.303) 0:02:02.996 ***** 2026-02-28 00:53:11.211668 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211675 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211681 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211688 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211694 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211701 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211711 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211718 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211730 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211742 | orchestrator | 2026-02-28 00:53:11.211748 | 
orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-28 00:53:11.211755 | orchestrator | Saturday 28 February 2026 00:52:36 +0000 (0:00:01.742) 0:02:04.739 ***** 2026-02-28 00:53:11.211761 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211768 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211774 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211780 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211800 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211823 | orchestrator | 2026-02-28 00:53:11.211829 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-28 00:53:11.211842 | orchestrator | Saturday 28 February 2026 00:52:41 +0000 (0:00:05.069) 0:02:09.809 ***** 2026-02-28 00:53:11.211853 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211859 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211866 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211872 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211879 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211908 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 00:53:11.211915 | orchestrator | 2026-02-28 00:53:11.211921 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.211927 | orchestrator | Saturday 28 February 2026 00:52:44 +0000 (0:00:03.089) 0:02:12.898 ***** 2026-02-28 00:53:11.211938 | orchestrator | 2026-02-28 00:53:11.211944 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.211950 | orchestrator | Saturday 28 February 2026 00:52:44 +0000 (0:00:00.079) 0:02:12.977 ***** 2026-02-28 00:53:11.211957 | orchestrator | 2026-02-28 00:53:11.211963 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-28 00:53:11.211969 | orchestrator | Saturday 28 February 2026 00:52:44 +0000 (0:00:00.079) 0:02:13.057 ***** 2026-02-28 00:53:11.211975 | orchestrator | 2026-02-28 00:53:11.211982 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-28 00:53:11.211988 | orchestrator | Saturday 28 February 2026 00:52:44 +0000 (0:00:00.066) 0:02:13.124 ***** 2026-02-28 00:53:11.211994 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.212001 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.212007 | orchestrator | 2026-02-28 00:53:11.212016 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-28 00:53:11.212023 | orchestrator | Saturday 28 February 2026 00:52:50 +0000 (0:00:06.176) 0:02:19.301 ***** 2026-02-28 00:53:11.212029 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.212035 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.212041 | orchestrator | 2026-02-28 00:53:11.212048 | orchestrator | RUNNING 
HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-28 00:53:11.212054 | orchestrator | Saturday 28 February 2026 00:52:57 +0000 (0:00:06.466) 0:02:25.767 ***** 2026-02-28 00:53:11.212060 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:53:11.212066 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:53:11.212073 | orchestrator | 2026-02-28 00:53:11.212079 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-28 00:53:11.212085 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:06.458) 0:02:32.225 ***** 2026-02-28 00:53:11.212091 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:53:11.212097 | orchestrator | 2026-02-28 00:53:11.212103 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-28 00:53:11.212110 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:00.141) 0:02:32.367 ***** 2026-02-28 00:53:11.212116 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.212122 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.212128 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.212135 | orchestrator | 2026-02-28 00:53:11.212141 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-28 00:53:11.212147 | orchestrator | Saturday 28 February 2026 00:53:04 +0000 (0:00:00.786) 0:02:33.154 ***** 2026-02-28 00:53:11.212153 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:53:11.212160 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:53:11.212166 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.212172 | orchestrator | 2026-02-28 00:53:11.212178 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-28 00:53:11.212184 | orchestrator | Saturday 28 February 2026 00:53:05 +0000 (0:00:00.626) 0:02:33.780 ***** 2026-02-28 00:53:11.212191 | orchestrator | ok: 
[testbed-node-0] 2026-02-28 00:53:11.212197 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.212203 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.212209 | orchestrator | 2026-02-28 00:53:11.212216 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-28 00:53:11.212222 | orchestrator | Saturday 28 February 2026 00:53:06 +0000 (0:00:00.845) 0:02:34.625 ***** 2026-02-28 00:53:11.212228 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:53:11.212234 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:53:11.212241 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:53:11.212247 | orchestrator | 2026-02-28 00:53:11.212253 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-28 00:53:11.212259 | orchestrator | Saturday 28 February 2026 00:53:06 +0000 (0:00:00.651) 0:02:35.277 ***** 2026-02-28 00:53:11.212265 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.212278 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.212284 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.212290 | orchestrator | 2026-02-28 00:53:11.212297 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-28 00:53:11.212303 | orchestrator | Saturday 28 February 2026 00:53:07 +0000 (0:00:00.736) 0:02:36.013 ***** 2026-02-28 00:53:11.212309 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:53:11.212315 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:53:11.212322 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:53:11.212328 | orchestrator | 2026-02-28 00:53:11.212334 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:53:11.212350 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-28 00:53:11.212357 | orchestrator | testbed-node-1 : ok=43  changed=19  
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-28 00:53:11.212363 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-28 00:53:11.212370 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:53:11.212376 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:53:11.212382 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 00:53:11.212389 | orchestrator | 2026-02-28 00:53:11.212395 | orchestrator | 2026-02-28 00:53:11.212401 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:53:11.212408 | orchestrator | Saturday 28 February 2026 00:53:08 +0000 (0:00:00.958) 0:02:36.972 ***** 2026-02-28 00:53:11.212414 | orchestrator | =============================================================================== 2026-02-28 00:53:11.212420 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 25.80s 2026-02-28 00:53:11.212426 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.09s 2026-02-28 00:53:11.212432 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.80s 2026-02-28 00:53:11.212439 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.07s 2026-02-28 00:53:11.212445 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.06s 2026-02-28 00:53:11.212451 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.07s 2026-02-28 00:53:11.212457 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.93s 2026-02-28 00:53:11.212466 | orchestrator | ovn-db : Check ovn containers 
------------------------------------------- 3.09s 2026-02-28 00:53:11.212473 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.85s 2026-02-28 00:53:11.212479 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.78s 2026-02-28 00:53:11.212485 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.24s 2026-02-28 00:53:11.212492 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.22s 2026-02-28 00:53:11.212498 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.13s 2026-02-28 00:53:11.212504 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.95s 2026-02-28 00:53:11.212510 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.81s 2026-02-28 00:53:11.212516 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.74s 2026-02-28 00:53:11.212523 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.65s 2026-02-28 00:53:11.212534 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.54s 2026-02-28 00:53:11.212540 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.51s 2026-02-28 00:53:11.212546 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.27s 2026-02-28 00:53:11.212552 | orchestrator | 2026-02-28 00:53:11 | INFO  | Task e47ca0be-0e8d-40fe-94b7-6ba7cfe63dbd is in state SUCCESS 2026-02-28 00:53:11.212559 | orchestrator | 2026-02-28 00:53:11 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:53:11.212565 | orchestrator | 2026-02-28 00:53:11 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:53:11.212572 | orchestrator | 
2026-02-28 00:53:11 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:53:14.242929 | orchestrator | 2026-02-28 00:53:14 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:53:14.243742 | orchestrator | 2026-02-28 00:53:14 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state STARTED 2026-02-28 00:53:14.243761 | orchestrator | 2026-02-28 00:53:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:23.186277 | orchestrator | 2026-02-28 
00:56:23 | INFO  | Task be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED 2026-02-28 00:56:23.193009 | orchestrator | 2026-02-28 00:56:23 | INFO  | Task 1843396d-11fe-4284-87cf-b9c779822136 is in state SUCCESS 2026-02-28 00:56:23.197081 | orchestrator | 2026-02-28 00:56:23.197207 | orchestrator | 2026-02-28 00:56:23.197222 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:56:23.197235 | orchestrator | 2026-02-28 00:56:23.197246 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:56:23.197258 | orchestrator | Saturday 28 February 2026 00:49:03 +0000 (0:00:00.293) 0:00:00.293 ***** 2026-02-28 00:56:23.197269 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.197277 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.197285 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.197292 | orchestrator | 2026-02-28 00:56:23.197298 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:56:23.197305 | orchestrator | Saturday 28 February 2026 00:49:03 +0000 (0:00:00.481) 0:00:00.775 ***** 2026-02-28 00:56:23.197312 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-28 00:56:23.197319 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-28 00:56:23.197325 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-28 00:56:23.197331 | orchestrator | 2026-02-28 00:56:23.197338 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-28 00:56:23.197365 | orchestrator | 2026-02-28 00:56:23.197376 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-28 00:56:23.197386 | orchestrator | Saturday 28 February 2026 00:49:04 +0000 (0:00:00.501) 0:00:01.277 ***** 2026-02-28 00:56:23.197396 | orchestrator | 
included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.197407 | orchestrator | 2026-02-28 00:56:23.197426 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-28 00:56:23.197436 | orchestrator | Saturday 28 February 2026 00:49:04 +0000 (0:00:00.828) 0:00:02.105 ***** 2026-02-28 00:56:23.197447 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.197457 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.197467 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.197477 | orchestrator | 2026-02-28 00:56:23.197488 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-28 00:56:23.197498 | orchestrator | Saturday 28 February 2026 00:49:05 +0000 (0:00:00.704) 0:00:02.809 ***** 2026-02-28 00:56:23.197508 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.197519 | orchestrator | 2026-02-28 00:56:23.197529 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-28 00:56:23.197541 | orchestrator | Saturday 28 February 2026 00:49:06 +0000 (0:00:01.210) 0:00:04.020 ***** 2026-02-28 00:56:23.197602 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.197608 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.197614 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.197664 | orchestrator | 2026-02-28 00:56:23.197673 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-28 00:56:23.197679 | orchestrator | Saturday 28 February 2026 00:49:07 +0000 (0:00:00.648) 0:00:04.668 ***** 2026-02-28 00:56:23.197685 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-28 00:56:23.197692 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 
'value': 1}) 2026-02-28 00:56:23.197698 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-28 00:56:23.197704 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-28 00:56:23.197710 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-28 00:56:23.197717 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-28 00:56:23.197723 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-28 00:56:23.197730 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-28 00:56:23.197787 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-28 00:56:23.197794 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-28 00:56:23.197800 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-28 00:56:23.197806 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-28 00:56:23.197813 | orchestrator | 2026-02-28 00:56:23.197819 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-28 00:56:23.197825 | orchestrator | Saturday 28 February 2026 00:49:10 +0000 (0:00:02.857) 0:00:07.525 ***** 2026-02-28 00:56:23.197832 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-28 00:56:23.197839 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-28 00:56:23.197845 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-28 00:56:23.197851 | orchestrator | 2026-02-28 00:56:23.197858 | orchestrator | TASK [module-load : Persist modules via 
modules-load.d] ************************ 2026-02-28 00:56:23.197864 | orchestrator | Saturday 28 February 2026 00:49:11 +0000 (0:00:00.889) 0:00:08.415 ***** 2026-02-28 00:56:23.197882 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-28 00:56:23.197888 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-28 00:56:23.197895 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-28 00:56:23.197901 | orchestrator | 2026-02-28 00:56:23.197907 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-28 00:56:23.197913 | orchestrator | Saturday 28 February 2026 00:49:13 +0000 (0:00:01.849) 0:00:10.264 ***** 2026-02-28 00:56:23.197920 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-28 00:56:23.197926 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.197946 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-28 00:56:23.197953 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.197959 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-28 00:56:23.197965 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.197972 | orchestrator | 2026-02-28 00:56:23.197978 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-28 00:56:23.197984 | orchestrator | Saturday 28 February 2026 00:49:14 +0000 (0:00:01.139) 0:00:11.403 ***** 2026-02-28 00:56:23.197992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2026-02-28 00:56:23.198115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:23.198125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:23.198132 | orchestrator | 2026-02-28 00:56:23.198139 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-28 00:56:23.198145 | orchestrator | Saturday 28 February 2026 00:49:16 +0000 (0:00:02.307) 0:00:13.711 ***** 2026-02-28 00:56:23.198152 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.198158 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.198164 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.198171 | orchestrator | 2026-02-28 00:56:23.198177 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-28 00:56:23.198183 | orchestrator | Saturday 28 February 2026 00:49:18 +0000 (0:00:01.886) 0:00:15.597 ***** 2026-02-28 00:56:23.198190 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-28 00:56:23.198196 | 
orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-28 00:56:23.198202 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-28 00:56:23.198209 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-28 00:56:23.198215 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-28 00:56:23.198221 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-28 00:56:23.198227 | orchestrator | 2026-02-28 00:56:23.198233 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-28 00:56:23.198240 | orchestrator | Saturday 28 February 2026 00:49:21 +0000 (0:00:03.349) 0:00:18.947 ***** 2026-02-28 00:56:23.198250 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.198282 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.198289 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.198296 | orchestrator | 2026-02-28 00:56:23.198302 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-28 00:56:23.198308 | orchestrator | Saturday 28 February 2026 00:49:24 +0000 (0:00:02.572) 0:00:21.519 ***** 2026-02-28 00:56:23.198315 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.198321 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.198327 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.198333 | orchestrator | 2026-02-28 00:56:23.198340 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-28 00:56:23.198346 | orchestrator | Saturday 28 February 2026 00:49:27 +0000 (0:00:03.192) 0:00:24.712 ***** 2026-02-28 00:56:23.198352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:23.198366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:23.198373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.198412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:23.198420 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.198427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:23.198438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:23.198445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.198451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:23.198458 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.198513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:23.198521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:23.198531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.198538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:23.198583 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.198590 | orchestrator | 2026-02-28 00:56:23.198597 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-28 00:56:23.198603 | orchestrator | Saturday 28 February 2026 00:49:30 +0000 
(0:00:02.937) 0:00:27.650 ***** 2026-02-28 00:56:23.198610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.198666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 
00:56:23.198677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.198697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:23.198723 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.198756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52', '__omit_place_holder__29786c059df6965f601c257d6a2ad78202ca0a52'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-28 00:56:23.198773 | orchestrator | 2026-02-28 00:56:23.198784 | orchestrator | TASK [loadbalancer : Copying over config.json 
files for services] ************** 2026-02-28 00:56:23.198794 | orchestrator | Saturday 28 February 2026 00:49:34 +0000 (0:00:04.296) 0:00:31.946 ***** 2026-02-28 00:56:23.198821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.198996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.199009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:23.199047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:23.199058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:23.199068 | orchestrator | 2026-02-28 00:56:23.199079 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-28 00:56:23.199088 | orchestrator 
| Saturday 28 February 2026 00:49:39 +0000 (0:00:04.620) 0:00:36.567 ***** 2026-02-28 00:56:23.199099 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-28 00:56:23.199110 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-28 00:56:23.199121 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-28 00:56:23.199132 | orchestrator | 2026-02-28 00:56:23.199142 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-28 00:56:23.199152 | orchestrator | Saturday 28 February 2026 00:49:44 +0000 (0:00:05.078) 0:00:41.645 ***** 2026-02-28 00:56:23.199162 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-28 00:56:23.199172 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-28 00:56:23.199240 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-28 00:56:23.199252 | orchestrator | 2026-02-28 00:56:23.199278 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-28 00:56:23.199290 | orchestrator | Saturday 28 February 2026 00:49:50 +0000 (0:00:06.461) 0:00:48.107 ***** 2026-02-28 00:56:23.199301 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.199312 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.199322 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.199342 | orchestrator | 2026-02-28 00:56:23.199365 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-28 00:56:23.199376 | orchestrator | Saturday 28 February 2026 00:49:52 +0000 (0:00:01.197) 
0:00:49.304 ***** 2026-02-28 00:56:23.199387 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-28 00:56:23.199400 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-28 00:56:23.199410 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-28 00:56:23.199419 | orchestrator | 2026-02-28 00:56:23.199429 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-28 00:56:23.199438 | orchestrator | Saturday 28 February 2026 00:49:56 +0000 (0:00:04.431) 0:00:53.736 ***** 2026-02-28 00:56:23.199454 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-28 00:56:23.199464 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-28 00:56:23.199474 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-28 00:56:23.199484 | orchestrator | 2026-02-28 00:56:23.199494 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-28 00:56:23.199591 | orchestrator | Saturday 28 February 2026 00:50:00 +0000 (0:00:03.741) 0:00:57.478 ***** 2026-02-28 00:56:23.199607 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-28 00:56:23.199618 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-28 00:56:23.199629 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-28 00:56:23.199640 | orchestrator | 2026-02-28 00:56:23.199678 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-28 00:56:23.199690 | 
orchestrator | Saturday 28 February 2026 00:50:02 +0000 (0:00:02.060) 0:00:59.538 ***** 2026-02-28 00:56:23.199701 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-28 00:56:23.199712 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-28 00:56:23.199723 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-28 00:56:23.199734 | orchestrator | 2026-02-28 00:56:23.199745 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-28 00:56:23.199844 | orchestrator | Saturday 28 February 2026 00:50:04 +0000 (0:00:02.218) 0:01:01.757 ***** 2026-02-28 00:56:23.199862 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.199874 | orchestrator | 2026-02-28 00:56:23.199885 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-28 00:56:23.199897 | orchestrator | Saturday 28 February 2026 00:50:06 +0000 (0:00:01.405) 0:01:03.162 ***** 2026-02-28 00:56:23.199909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.199922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.199956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-28 00:56:23.199969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.199986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.199997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-28 00:56:23.200008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:23.200019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:23.200031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:23.200050 | orchestrator | 2026-02-28 00:56:23.200061 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-28 00:56:23.200072 | orchestrator | Saturday 28 February 2026 00:50:10 +0000 (0:00:04.909) 0:01:08.071 ***** 2026-02-28 00:56:23.200092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:23.200105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:23.200122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.200135 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.200146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:23.200158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:23.200171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.200189 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.200201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:23.200220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:23.200237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.200250 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.200261 | orchestrator | 2026-02-28 00:56:23.200272 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-28 00:56:23.200282 | orchestrator | Saturday 28 February 2026 00:50:14 +0000 (0:00:03.578) 0:01:11.650 ***** 2026-02-28 00:56:23.200293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:23.200303 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:23.200313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.200385 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.200398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-28 00:56:23.200418 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:23.200431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.200442 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.200453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-28 00:56:23.200464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:23.200505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.200526 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.200538 | orchestrator | 2026-02-28 00:56:23.200573 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-28 00:56:23.200723 | orchestrator | Saturday 28 February 2026 00:50:17 +0000 (0:00:03.334) 0:01:14.984 ***** 2026-02-28 00:56:23.200737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-28 00:56:23.200760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-28 00:56:23.200773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-28 00:56:23.200786 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.200809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.200823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.200834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.200854 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.200866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.200878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.200897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.200909 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.200922 | orchestrator |
2026-02-28 00:56:23.200934 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-28 00:56:23.200948 | orchestrator | Saturday 28 February 2026 00:50:20 +0000 (0:00:02.226) 0:01:17.211 *****
2026-02-28 00:56:23.200960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.200979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.200992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201011 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.201022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201057 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.201076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201126 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.201138 | orchestrator |
2026-02-28 00:56:23.201150 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-02-28 00:56:23.201162 | orchestrator | Saturday 28 February 2026 00:50:21 +0000 (0:00:01.417) 0:01:18.628 *****
2026-02-28 00:56:23.201173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201207 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.201224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201458 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.201470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201504 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.201515 | orchestrator |
2026-02-28 00:56:23.201526 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-02-28 00:56:23.201537 | orchestrator | Saturday 28 February 2026 00:50:22 +0000 (0:00:01.195) 0:01:19.824 *****
2026-02-28 00:56:23.201571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201658 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.201669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201697 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.201708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201719 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.201730 | orchestrator |
2026-02-28 00:56:23.201747 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-02-28 00:56:23.201758 | orchestrator | Saturday 28 February 2026 00:50:23 +0000 (0:00:01.229) 0:01:21.054 *****
2026-02-28 00:56:23.201774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201808 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.201819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201873 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.201890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.201932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.201944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.201954 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.201965 | orchestrator |
2026-02-28 00:56:23.201976 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-02-28 00:56:23.201987 | orchestrator | Saturday 28 February 2026 00:50:24 +0000 (0:00:01.061) 0:01:22.115 *****
2026-02-28 00:56:23.201998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.206279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.206297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.206309 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.206371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.206396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.206408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.206419 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.206429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.206438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.206445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.206451 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.206457 | orchestrator |
2026-02-28 00:56:23.206465 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-02-28 00:56:23.206473 | orchestrator | Saturday 28 February 2026 00:50:25 +0000 (0:00:00.985) 0:01:23.101 *****
2026-02-28 00:56:23.206496 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-28 00:56:23.206503 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-28 00:56:23.206517 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-28 00:56:23.206524 | orchestrator |
2026-02-28 00:56:23.206529 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-02-28 00:56:23.206535 | orchestrator | Saturday 28 February 2026 00:50:27 +0000 (0:00:01.854) 0:01:24.956 *****
2026-02-28 00:56:23.206566 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-28 00:56:23.206573 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-28 00:56:23.206579 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-28 00:56:23.206585 | orchestrator |
2026-02-28 00:56:23.206591 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-02-28 00:56:23.206597 | orchestrator | Saturday 28 February 2026 00:50:29 +0000 (0:00:01.416) 0:01:26.372 *****
2026-02-28 00:56:23.206602 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-28 00:56:23.206608 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-28 00:56:23.206614 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-28 00:56:23.206625 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-28 00:56:23.206631 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.206637 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-28 00:56:23.206643 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.206649 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-28 00:56:23.206655 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.206661 | orchestrator |
2026-02-28 00:56:23.206666 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-02-28 00:56:23.206672 | orchestrator | Saturday 28 February 2026 00:50:30 +0000 (0:00:01.185) 0:01:27.557 *****
2026-02-28 00:56:23.206678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.206685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.206691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-28 00:56:23.206720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.206727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.206736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-28 00:56:23.206742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-28 00:56:23.206749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:23.206755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-28 00:56:23.206766 | orchestrator | 2026-02-28 00:56:23.206771 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-28 00:56:23.206777 | orchestrator | Saturday 28 February 2026 00:50:33 +0000 (0:00:02.980) 0:01:30.538 ***** 2026-02-28 00:56:23.206784 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.206789 | orchestrator | 2026-02-28 00:56:23.206795 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-28 00:56:23.206801 | orchestrator | Saturday 28 February 2026 00:50:33 +0000 (0:00:00.593) 0:01:31.131 ***** 2026-02-28 00:56:23.206808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-28 00:56:23.206820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.206830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.206836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.206842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-28 00:56:23.206853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.206859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.206871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.206877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-28 00:56:23.206895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.206902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.206908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.206918 | orchestrator | 2026-02-28 00:56:23.206924 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-28 00:56:23.206930 | orchestrator | Saturday 28 February 2026 00:50:40 +0000 (0:00:06.286) 
0:01:37.418 ***** 2026-02-28 00:56:23.206936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-28 00:56:23.206948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.206954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.206964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.206970 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.206976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-28 00:56:23.206986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.206992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.206998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.207004 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.207015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-28 00:56:23.207024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.207031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.207091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.207098 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.207104 | orchestrator | 2026-02-28 00:56:23.207110 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-28 00:56:23.207116 | orchestrator | Saturday 28 February 2026 00:50:42 +0000 (0:00:01.779) 0:01:39.198 ***** 2026-02-28 00:56:23.207123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:23.207130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:23.207138 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.207144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:23.207150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:23.207156 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.207162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:23.207168 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-28 00:56:23.207174 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.207180 | orchestrator | 2026-02-28 00:56:23.207190 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-28 00:56:23.207196 | orchestrator | Saturday 28 February 2026 00:50:43 +0000 (0:00:01.163) 0:01:40.361 ***** 2026-02-28 00:56:23.207202 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.207208 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.207214 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.207220 | orchestrator | 2026-02-28 00:56:23.207226 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-28 00:56:23.207232 | orchestrator | Saturday 28 February 2026 00:50:44 +0000 (0:00:01.561) 0:01:41.923 ***** 2026-02-28 00:56:23.207238 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.207243 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.207249 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.207255 | orchestrator | 2026-02-28 00:56:23.207261 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-28 00:56:23.207267 | orchestrator | Saturday 28 February 2026 00:50:47 +0000 (0:00:02.359) 0:01:44.282 ***** 2026-02-28 00:56:23.207273 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.207279 | orchestrator | 2026-02-28 00:56:23.207284 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-28 00:56:23.207294 | orchestrator | Saturday 28 February 2026 00:50:48 +0000 (0:00:01.668) 0:01:45.950 ***** 2026-02-28 
00:56:23.207306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.207313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.207319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.207326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.207336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.207346 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.207356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-28 00:56:23.207362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.207368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.207374 | orchestrator |
2026-02-28 00:56:23.207380 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-02-28 00:56:23.207386 | orchestrator | Saturday 28 February 2026 00:50:53 +0000 (0:00:04.928) 0:01:50.878 *****
2026-02-28 00:56:23.207397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-28 00:56:23.207403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.207417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.207423 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.207430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-28 00:56:23.207436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.207442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.207448 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.207458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-28 00:56:23.207474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.207480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.207486 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.207492 | orchestrator |
2026-02-28 00:56:23.207498 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-02-28 00:56:23.207504 | orchestrator | Saturday 28 February 2026 00:50:54 +0000 (0:00:01.047) 0:01:51.925 *****
2026-02-28 00:56:23.207511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-02-28 00:56:23.207517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-02-28 00:56:23.207523 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.207529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-02-28 00:56:23.207535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-02-28 00:56:23.207581 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.207593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-02-28 00:56:23.207602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-02-28 00:56:23.207613 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.207619 | orchestrator |
2026-02-28 00:56:23.207625 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-02-28 00:56:23.207631 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:01.408) 0:01:53.333 *****
2026-02-28 00:56:23.207637 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:23.207642 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:23.207653 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:23.207659 | orchestrator |
2026-02-28 00:56:23.207665 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-02-28 00:56:23.207671 | orchestrator | Saturday 28 February 2026 00:50:57 +0000 (0:00:01.540) 0:01:54.874 *****
2026-02-28 00:56:23.207677 | orchestrator | changed: [testbed-node-0]
2026-02-28 00:56:23.207683 | orchestrator | changed: [testbed-node-1]
2026-02-28 00:56:23.207688 | orchestrator | changed: [testbed-node-2]
2026-02-28 00:56:23.207694 | orchestrator |
2026-02-28 00:56:23.207706 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-02-28 00:56:23.207716 | orchestrator | Saturday 28 February 2026 00:51:00 +0000 (0:00:02.698) 0:01:57.573 *****
2026-02-28 00:56:23.207726 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.207735 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.207744 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.207759 | orchestrator |
2026-02-28 00:56:23.207768 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-02-28 00:56:23.207778 | orchestrator | Saturday 28 February 2026 00:51:00 +0000 (0:00:00.401) 0:01:57.974 *****
2026-02-28 00:56:23.207787 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:23.207797 | orchestrator |
2026-02-28 00:56:23.207807 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-02-28 00:56:23.207816 | orchestrator | Saturday 28 February 2026 00:51:02 +0000 (0:00:01.426) 0:01:59.401 *****
2026-02-28 00:56:23.207827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-28 00:56:23.207837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-28 00:56:23.207843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-28 00:56:23.207857 | orchestrator |
2026-02-28 00:56:23.207863 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-02-28 00:56:23.207869 | orchestrator | Saturday 28 February 2026 00:51:06 +0000 (0:00:04.303) 0:02:03.704 *****
2026-02-28 00:56:23.207880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-28 00:56:23.207887 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.207911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-28 00:56:23.207918 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.207927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-28 00:56:23.207934 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.207940 | orchestrator |
2026-02-28 00:56:23.207945 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-02-28 00:56:23.207951 | orchestrator | Saturday 28 February 2026 00:51:09 +0000 (0:00:02.615) 0:02:06.320 *****
2026-02-28 00:56:23.207958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-28 00:56:23.207967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-28 00:56:23.207979 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.207986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-28 00:56:23.207992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-28 00:56:23.207998 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.208007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-28 00:56:23.208014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-28 00:56:23.208020 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.208026 | orchestrator |
2026-02-28 00:56:23.208032 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-02-28 00:56:23.208037 | orchestrator | Saturday 28 February 2026 00:51:12 +0000 (0:00:03.068) 0:02:09.389 *****
2026-02-28 00:56:23.208043 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.208049 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.208055 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.208060 | orchestrator |
2026-02-28 00:56:23.208066 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-02-28 00:56:23.208072 | orchestrator | Saturday 28 February 2026 00:51:13 +0000 (0:00:00.765) 0:02:10.155 *****
2026-02-28 00:56:23.208078 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.208084 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.208093 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:56:23.208099 | orchestrator |
2026-02-28 00:56:23.208105 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-02-28 00:56:23.208111 | orchestrator | Saturday 28 February 2026 00:51:14 +0000 (0:00:01.899) 0:02:12.054 *****
2026-02-28 00:56:23.208117 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:56:23.208123 | orchestrator |
2026-02-28 00:56:23.208128 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-02-28 00:56:23.208134 | orchestrator | Saturday 28 February 2026 00:51:16 +0000 (0:00:01.281) 0:02:13.336 *****
2026-02-28 00:56:23.208140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-28 00:56:23.208151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-28 00:56:23.208185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-28 00:56:23.208218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208244 | orchestrator |
2026-02-28 00:56:23.208253 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-02-28 00:56:23.208263 | orchestrator | Saturday 28 February 2026 00:51:23 +0000 (0:00:07.590) 0:02:20.927 *****
2026-02-28 00:56:23.208272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-28 00:56:23.208281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208317 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:56:23.208331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-28 00:56:23.208348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-28 00:56:23.208367 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:56:23.208377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-28 00:56:23.208388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.208399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.208405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.208411 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.208417 | orchestrator | 2026-02-28 00:56:23.208423 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-28 00:56:23.208429 | orchestrator | Saturday 28 February 2026 00:51:26 +0000 (0:00:02.281) 
0:02:23.208 ***** 2026-02-28 00:56:23.208435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:23.208442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:23.208448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:23.208455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:23.208461 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.208467 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.208473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:23.208642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-28 00:56:23.208657 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.208663 | orchestrator | 2026-02-28 00:56:23.208669 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-28 00:56:23.208675 | orchestrator | Saturday 28 February 2026 00:51:27 
+0000 (0:00:01.671) 0:02:24.879 ***** 2026-02-28 00:56:23.208682 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.208687 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.208693 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.208705 | orchestrator | 2026-02-28 00:56:23.208711 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-28 00:56:23.208717 | orchestrator | Saturday 28 February 2026 00:51:29 +0000 (0:00:02.033) 0:02:26.912 ***** 2026-02-28 00:56:23.208723 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.208728 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.208734 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.208740 | orchestrator | 2026-02-28 00:56:23.208748 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-28 00:56:23.208758 | orchestrator | Saturday 28 February 2026 00:51:32 +0000 (0:00:02.645) 0:02:29.558 ***** 2026-02-28 00:56:23.208771 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.208782 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.208793 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.208802 | orchestrator | 2026-02-28 00:56:23.208822 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-28 00:56:23.208832 | orchestrator | Saturday 28 February 2026 00:51:33 +0000 (0:00:00.644) 0:02:30.203 ***** 2026-02-28 00:56:23.208841 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.208849 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.208857 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.208866 | orchestrator | 2026-02-28 00:56:23.208875 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-28 00:56:23.208884 | orchestrator | Saturday 28 February 2026 00:51:33 +0000 
(0:00:00.397) 0:02:30.601 ***** 2026-02-28 00:56:23.208894 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.208903 | orchestrator | 2026-02-28 00:56:23.208913 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-28 00:56:23.208922 | orchestrator | Saturday 28 February 2026 00:51:34 +0000 (0:00:01.084) 0:02:31.685 ***** 2026-02-28 00:56:23.208932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 00:56:23.208944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})  2026-02-28 00:56:23.208955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.208976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.208983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.208993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.208999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 00:56:23.209012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:23.209025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-02-28 00:56:23.209041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 00:56:23.209066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:23.209080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 
00:56:23.209127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209136 | orchestrator | 2026-02-28 00:56:23.209146 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-28 00:56:23.209155 | orchestrator | Saturday 28 February 2026 00:51:42 +0000 (0:00:07.464) 0:02:39.150 ***** 2026-02-28 00:56:23.209164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 00:56:23.209188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:23.209199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209242 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.209252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 00:56:23.209258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:23.209267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209274 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 00:56:23.209280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 00:56:23.209298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209342 | orchestrator | skipping: [testbed-node-2] 2026-02-28 
00:56:23.209348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.209363 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.209370 | orchestrator | 2026-02-28 00:56:23.209375 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-28 00:56:23.209381 | orchestrator | Saturday 28 February 2026 00:51:43 +0000 (0:00:01.297) 0:02:40.447 ***** 2026-02-28 00:56:23.209388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:23.209394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:23.209401 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.209407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:23.209416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:23.209423 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.209428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:23.209434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-28 00:56:23.209440 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.209446 | orchestrator | 2026-02-28 00:56:23.209452 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-28 00:56:23.209457 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:02.759) 0:02:43.208 ***** 2026-02-28 00:56:23.209463 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.209469 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.209475 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.209488 | orchestrator | 2026-02-28 00:56:23.209494 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-28 00:56:23.209499 | orchestrator | Saturday 28 February 2026 00:51:49 +0000 (0:00:03.370) 0:02:46.578 
***** 2026-02-28 00:56:23.209505 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.209511 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.209517 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.209522 | orchestrator | 2026-02-28 00:56:23.209528 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-28 00:56:23.209534 | orchestrator | Saturday 28 February 2026 00:51:51 +0000 (0:00:02.136) 0:02:48.715 ***** 2026-02-28 00:56:23.209540 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.209575 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.209581 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.209586 | orchestrator | 2026-02-28 00:56:23.209592 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-28 00:56:23.209598 | orchestrator | Saturday 28 February 2026 00:51:52 +0000 (0:00:00.646) 0:02:49.362 ***** 2026-02-28 00:56:23.209604 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.209610 | orchestrator | 2026-02-28 00:56:23.209616 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-28 00:56:23.209621 | orchestrator | Saturday 28 February 2026 00:51:53 +0000 (0:00:00.834) 0:02:50.196 ***** 2026-02-28 00:56:23.209634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 00:56:23.209646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.209658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 00:56:23.209673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.209684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 00:56:23.209698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 
ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.209709 | orchestrator | 2026-02-28 00:56:23.209715 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-28 00:56:23.209721 | orchestrator | Saturday 28 February 2026 00:51:59 +0000 (0:00:06.380) 0:02:56.577 ***** 2026-02-28 00:56:23.209728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 00:56:23.209738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.209746 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.209755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 00:56:23.209769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  
2026-02-28 00:56:23.209780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-28 00:56:23.209790 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.209800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.209807 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.209813 | orchestrator | 2026-02-28 00:56:23.209819 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-28 00:56:23.209825 | 
orchestrator | Saturday 28 February 2026 00:52:03 +0000 (0:00:03.845) 0:03:00.423 ***** 2026-02-28 00:56:23.209831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:23.209841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:23.209851 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.209858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:23.209864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:23.209870 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.209876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:23.209883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-28 00:56:23.209889 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.209895 | orchestrator | 2026-02-28 00:56:23.209901 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-28 00:56:23.209907 | orchestrator | Saturday 28 February 2026 00:52:07 +0000 (0:00:03.882) 0:03:04.305 ***** 2026-02-28 00:56:23.209913 | orchestrator | changed: [testbed-node-0] 
2026-02-28 00:56:23.209919 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.209925 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.209930 | orchestrator | 2026-02-28 00:56:23.209936 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-28 00:56:23.209942 | orchestrator | Saturday 28 February 2026 00:52:08 +0000 (0:00:01.406) 0:03:05.712 ***** 2026-02-28 00:56:23.209948 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.209954 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.209960 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.209966 | orchestrator | 2026-02-28 00:56:23.209972 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-28 00:56:23.209981 | orchestrator | Saturday 28 February 2026 00:52:10 +0000 (0:00:02.276) 0:03:07.988 ***** 2026-02-28 00:56:23.209987 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.209998 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.210004 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.210009 | orchestrator | 2026-02-28 00:56:23.210039 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-28 00:56:23.210045 | orchestrator | Saturday 28 February 2026 00:52:11 +0000 (0:00:00.584) 0:03:08.573 ***** 2026-02-28 00:56:23.210051 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.210057 | orchestrator | 2026-02-28 00:56:23.210065 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-28 00:56:23.210071 | orchestrator | Saturday 28 February 2026 00:52:12 +0000 (0:00:00.897) 0:03:09.470 ***** 2026-02-28 00:56:23.210081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 00:56:23.210088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 00:56:23.210095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 00:56:23.210101 | orchestrator | 2026-02-28 00:56:23.210107 
| orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-28 00:56:23.210113 | orchestrator | Saturday 28 February 2026 00:52:16 +0000 (0:00:03.766) 0:03:13.237 ***** 2026-02-28 00:56:23.210119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 00:56:23.210125 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.210138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 00:56:23.210152 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.210158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 00:56:23.210165 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.210171 | orchestrator | 2026-02-28 00:56:23.210177 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-28 00:56:23.210183 | orchestrator | Saturday 28 February 2026 00:52:16 +0000 (0:00:00.774) 0:03:14.012 ***** 2026-02-28 00:56:23.210189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:23.210195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:23.210201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:23.210207 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.210213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:23.210219 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.210225 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:23.210231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-28 00:56:23.210237 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.210243 | orchestrator | 2026-02-28 00:56:23.210249 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-28 00:56:23.210255 | orchestrator | Saturday 28 February 2026 00:52:17 +0000 (0:00:00.914) 0:03:14.927 ***** 2026-02-28 00:56:23.210261 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.210267 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.210273 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.210279 | orchestrator | 2026-02-28 00:56:23.210285 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-28 00:56:23.210291 | orchestrator | Saturday 28 February 2026 00:52:19 +0000 (0:00:01.407) 0:03:16.334 ***** 2026-02-28 00:56:23.210297 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.210306 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.210316 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.210325 | orchestrator | 2026-02-28 00:56:23.210352 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-28 00:56:23.210362 | orchestrator | Saturday 28 February 2026 00:52:21 +0000 (0:00:02.208) 0:03:18.543 ***** 2026-02-28 00:56:23.210371 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.210381 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.210391 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 00:56:23.210401 | orchestrator | 2026-02-28 00:56:23.210411 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-28 00:56:23.210421 | orchestrator | Saturday 28 February 2026 00:52:21 +0000 (0:00:00.580) 0:03:19.123 ***** 2026-02-28 00:56:23.210427 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.210433 | orchestrator | 2026-02-28 00:56:23.210439 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-28 00:56:23.210445 | orchestrator | Saturday 28 February 2026 00:52:22 +0000 (0:00:00.959) 0:03:20.082 ***** 2026-02-28 00:56:23.210485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 00:56:23.210493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 00:56:23.210515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 00:56:23.210523 | orchestrator | 2026-02-28 00:56:23.210529 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-28 00:56:23.210535 | orchestrator | Saturday 28 February 2026 00:52:26 +0000 (0:00:03.723) 0:03:23.806 ***** 2026-02-28 00:56:23.210562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 00:56:23.210574 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.210584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 00:56:23.210595 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.210605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 00:56:23.210612 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.210665 | orchestrator | 2026-02-28 00:56:23.210673 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-28 00:56:23.210679 | orchestrator | Saturday 28 February 2026 00:52:27 +0000 (0:00:01.157) 0:03:24.964 ***** 2026-02-28 00:56:23.210686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:23.210696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:56:23.210705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:23.210712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:56:23.210718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-28 00:56:23.210730 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.210736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:23.210742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:56:23.210748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:23.210754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:56:23.210761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-28 00:56:23.210767 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.210773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:23.210783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:56:23.210789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-28 00:56:23.210796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-28 00:56:23.210802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-28 00:56:23.210808 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.210814 | orchestrator | 2026-02-28 00:56:23.210823 | orchestrator | TASK 
[proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-28 00:56:23.210829 | orchestrator | Saturday 28 February 2026 00:52:28 +0000 (0:00:01.070) 0:03:26.034 ***** 2026-02-28 00:56:23.210836 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.210841 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.210847 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.210853 | orchestrator | 2026-02-28 00:56:23.210863 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-28 00:56:23.210870 | orchestrator | Saturday 28 February 2026 00:52:30 +0000 (0:00:01.443) 0:03:27.477 ***** 2026-02-28 00:56:23.210876 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.210882 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.210888 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.210893 | orchestrator | 2026-02-28 00:56:23.210899 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-28 00:56:23.210905 | orchestrator | Saturday 28 February 2026 00:52:32 +0000 (0:00:02.240) 0:03:29.718 ***** 2026-02-28 00:56:23.210912 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.210917 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.210924 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.210930 | orchestrator | 2026-02-28 00:56:23.210936 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-28 00:56:23.210942 | orchestrator | Saturday 28 February 2026 00:52:32 +0000 (0:00:00.341) 0:03:30.060 ***** 2026-02-28 00:56:23.210948 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.210954 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.210959 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.210965 | orchestrator | 2026-02-28 00:56:23.210971 | orchestrator | TASK 
[include_role : keystone] ************************************************* 2026-02-28 00:56:23.210977 | orchestrator | Saturday 28 February 2026 00:52:33 +0000 (0:00:00.605) 0:03:30.666 ***** 2026-02-28 00:56:23.210983 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.210989 | orchestrator | 2026-02-28 00:56:23.210995 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-28 00:56:23.211001 | orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:01.014) 0:03:31.680 ***** 2026-02-28 00:56:23.211008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 00:56:23.211018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:56:23.211026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:56:23.211044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 
00:56:23.211051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:56:23.211058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:56:23.211065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 00:56:23.211075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:56:23.211089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:56:23.211095 | orchestrator | 2026-02-28 00:56:23.211105 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-28 00:56:23.211111 | orchestrator | Saturday 28 February 2026 00:52:39 +0000 (0:00:05.208) 0:03:36.888 ***** 2026-02-28 00:56:23.211118 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 00:56:23.211124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:56:23.211130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:56:23.211137 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.211147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 00:56:23.211159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:56:23.211168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:56:23.211175 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.211181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 00:56:23.211188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 00:56:23.211194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 00:56:23.211200 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.211206 | orchestrator | 2026-02-28 00:56:23.211217 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-28 00:56:23.211226 | orchestrator | Saturday 28 February 2026 00:52:40 +0000 (0:00:00.775) 0:03:37.664 ***** 2026-02-28 00:56:23.211232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-28 00:56:23.211239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-28 00:56:23.211246 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.211252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-28 00:56:23.211261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-28 00:56:23.211267 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.211273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-28 00:56:23.211279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-28 00:56:23.211285 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.211291 | orchestrator | 2026-02-28 00:56:23.211297 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-28 00:56:23.211303 | orchestrator | Saturday 28 February 2026 00:52:41 +0000 (0:00:00.925) 0:03:38.589 ***** 2026-02-28 00:56:23.211309 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.211315 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.211321 | orchestrator | changed: [testbed-node-2] 2026-02-28 
00:56:23.211327 | orchestrator | 2026-02-28 00:56:23.211332 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-28 00:56:23.211338 | orchestrator | Saturday 28 February 2026 00:52:42 +0000 (0:00:01.526) 0:03:40.116 ***** 2026-02-28 00:56:23.211344 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.211350 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.211356 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.211362 | orchestrator | 2026-02-28 00:56:23.211368 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-28 00:56:23.211374 | orchestrator | Saturday 28 February 2026 00:52:45 +0000 (0:00:02.392) 0:03:42.508 ***** 2026-02-28 00:56:23.211380 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.211386 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.211392 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.211398 | orchestrator | 2026-02-28 00:56:23.211404 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-28 00:56:23.211409 | orchestrator | Saturday 28 February 2026 00:52:45 +0000 (0:00:00.615) 0:03:43.124 ***** 2026-02-28 00:56:23.211415 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.211421 | orchestrator | 2026-02-28 00:56:23.211427 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-28 00:56:23.211433 | orchestrator | Saturday 28 February 2026 00:52:46 +0000 (0:00:01.011) 0:03:44.136 ***** 2026-02-28 00:56:23.211443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 00:56:23.211455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 00:56:23.211472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 00:56:23.211478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211494 | orchestrator | 2026-02-28 00:56:23.211501 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-28 00:56:23.211507 | orchestrator | Saturday 28 February 2026 00:52:51 +0000 (0:00:04.114) 0:03:48.250 ***** 2026-02-28 00:56:23.211517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2026-02-28 00:56:23.211526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211533 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.211539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 00:56:23.211565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211580 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.211591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 00:56:23.211598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211604 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.211610 | orchestrator | 2026-02-28 00:56:23.211616 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-28 00:56:23.211622 | orchestrator | Saturday 28 February 2026 00:52:52 +0000 (0:00:01.171) 0:03:49.422 ***** 2026-02-28 00:56:23.211632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-28 00:56:23.211638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-28 00:56:23.211644 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.211650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-28 00:56:23.211656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-28 00:56:23.211662 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.211668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}})  2026-02-28 00:56:23.211674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-28 00:56:23.211684 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.211690 | orchestrator | 2026-02-28 00:56:23.211696 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-28 00:56:23.211702 | orchestrator | Saturday 28 February 2026 00:52:53 +0000 (0:00:00.977) 0:03:50.400 ***** 2026-02-28 00:56:23.211708 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.211713 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.211719 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.211725 | orchestrator | 2026-02-28 00:56:23.211731 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-28 00:56:23.211737 | orchestrator | Saturday 28 February 2026 00:52:54 +0000 (0:00:01.311) 0:03:51.712 ***** 2026-02-28 00:56:23.211742 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.211748 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.211754 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.211759 | orchestrator | 2026-02-28 00:56:23.211765 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-28 00:56:23.211771 | orchestrator | Saturday 28 February 2026 00:52:56 +0000 (0:00:02.216) 0:03:53.928 ***** 2026-02-28 00:56:23.211777 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.211783 | orchestrator | 2026-02-28 00:56:23.211788 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-28 00:56:23.211794 | orchestrator | Saturday 28 February 2026 00:52:58 +0000 (0:00:01.379) 
0:03:55.308 ***** 2026-02-28 00:56:23.211800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-28 00:56:23.211811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-28 00:56:23.211844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-28 00:56:23.211860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211901 | orchestrator | 2026-02-28 00:56:23.211907 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-28 00:56:23.211913 | orchestrator | Saturday 28 February 2026 00:53:01 +0000 (0:00:03.652) 0:03:58.960 ***** 2026-02-28 00:56:23.211923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-28 00:56:23.211929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211954 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.211961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-28 00:56:23.211967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.211982 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:56:23.212111 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.212122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-28 00:56:23.212134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.212140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.212147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.212153 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.212159 | orchestrator | 2026-02-28 00:56:23.212165 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-28 00:56:23.212171 | orchestrator | Saturday 28 February 2026 00:53:02 +0000 (0:00:00.723) 0:03:59.683 ***** 2026-02-28 00:56:23.212177 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:23.212183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:23.212189 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.212195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:23.212205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:23.212211 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.212217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:23.212227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-28 00:56:23.212233 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.212239 | orchestrator | 2026-02-28 00:56:23.212245 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-28 00:56:23.212251 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:01.357) 0:04:01.040 ***** 2026-02-28 00:56:23.212256 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.212262 | orchestrator | changed: [testbed-node-1] 2026-02-28 
00:56:23.212268 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.212274 | orchestrator | 2026-02-28 00:56:23.212280 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-28 00:56:23.212289 | orchestrator | Saturday 28 February 2026 00:53:05 +0000 (0:00:01.438) 0:04:02.478 ***** 2026-02-28 00:56:23.212296 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.212301 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.212307 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.212313 | orchestrator | 2026-02-28 00:56:23.212319 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-28 00:56:23.212325 | orchestrator | Saturday 28 February 2026 00:53:07 +0000 (0:00:02.210) 0:04:04.688 ***** 2026-02-28 00:56:23.212330 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.212336 | orchestrator | 2026-02-28 00:56:23.212342 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-28 00:56:23.212348 | orchestrator | Saturday 28 February 2026 00:53:09 +0000 (0:00:01.522) 0:04:06.211 ***** 2026-02-28 00:56:23.212354 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-28 00:56:23.212360 | orchestrator | 2026-02-28 00:56:23.212366 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-28 00:56:23.212371 | orchestrator | Saturday 28 February 2026 00:53:11 +0000 (0:00:02.892) 0:04:09.103 ***** 2026-02-28 00:56:23.212378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:23.212393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:23.212403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:23.212410 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.212417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:23.212423 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.212432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:23.212444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:23.212453 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.212460 | orchestrator | 2026-02-28 00:56:23.212466 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-28 00:56:23.212471 | orchestrator | Saturday 28 February 2026 00:53:14 +0000 (0:00:02.369) 0:04:11.473 ***** 2026-02-28 00:56:23.212478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:23.212484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:23.212497 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.212511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:23.212518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:23.212524 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.212530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:56:23.212560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-28 00:56:23.212568 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.212574 | orchestrator | 2026-02-28 00:56:23.212580 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-28 00:56:23.212586 | orchestrator | Saturday 28 February 2026 00:53:16 +0000 (0:00:02.532) 0:04:14.005 ***** 2026-02-28 00:56:23.212597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:23.212604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:23.212610 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.212616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:23.212623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:23.212633 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.212639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:23.212648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-28 00:56:23.212655 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.212661 | orchestrator | 2026-02-28 00:56:23.212667 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-28 00:56:23.212672 | orchestrator | Saturday 28 February 2026 00:53:19 +0000 (0:00:02.899) 0:04:16.905 ***** 2026-02-28 00:56:23.212678 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.212684 | orchestrator | changed: 
[testbed-node-1] 2026-02-28 00:56:23.212690 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.212696 | orchestrator | 2026-02-28 00:56:23.212702 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-28 00:56:23.212709 | orchestrator | Saturday 28 February 2026 00:53:21 +0000 (0:00:01.745) 0:04:18.651 ***** 2026-02-28 00:56:23.212716 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.212722 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.212729 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.212736 | orchestrator | 2026-02-28 00:56:23.212743 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-28 00:56:23.212750 | orchestrator | Saturday 28 February 2026 00:53:22 +0000 (0:00:01.362) 0:04:20.013 ***** 2026-02-28 00:56:23.212757 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.212765 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.212775 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.212785 | orchestrator | 2026-02-28 00:56:23.212803 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-28 00:56:23.212817 | orchestrator | Saturday 28 February 2026 00:53:23 +0000 (0:00:00.277) 0:04:20.291 ***** 2026-02-28 00:56:23.212827 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.212837 | orchestrator | 2026-02-28 00:56:23.212847 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-28 00:56:23.212856 | orchestrator | Saturday 28 February 2026 00:53:24 +0000 (0:00:01.283) 0:04:21.575 ***** 2026-02-28 00:56:23.212865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 
'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-28 00:56:23.212884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-28 00:56:23.212895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-28 00:56:23.212905 | orchestrator | 2026-02-28 00:56:23.212915 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-28 00:56:23.212924 | orchestrator | Saturday 28 February 2026 00:53:25 +0000 (0:00:01.352) 0:04:22.928 ***** 2026-02-28 00:56:23.212940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-28 00:56:23.212951 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.212967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-28 00:56:23.212977 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.212989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-28 00:56:23.213002 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.213011 | orchestrator | 2026-02-28 00:56:23.213021 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-28 00:56:23.213031 | orchestrator | Saturday 28 February 2026 00:53:26 +0000 (0:00:00.379) 0:04:23.308 ***** 2026-02-28 00:56:23.213041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-28 00:56:23.213051 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.213062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-28 
00:56:23.213074 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.213085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-28 00:56:23.213095 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.213106 | orchestrator | 2026-02-28 00:56:23.213116 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-28 00:56:23.213126 | orchestrator | Saturday 28 February 2026 00:53:26 +0000 (0:00:00.758) 0:04:24.066 ***** 2026-02-28 00:56:23.213135 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.213143 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.213153 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.213163 | orchestrator | 2026-02-28 00:56:23.213174 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-28 00:56:23.213184 | orchestrator | Saturday 28 February 2026 00:53:27 +0000 (0:00:00.422) 0:04:24.489 ***** 2026-02-28 00:56:23.213194 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.213205 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.213214 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.213225 | orchestrator | 2026-02-28 00:56:23.213234 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-28 00:56:23.213244 | orchestrator | Saturday 28 February 2026 00:53:28 +0000 (0:00:01.206) 0:04:25.695 ***** 2026-02-28 00:56:23.213253 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.213264 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.213270 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.213276 | orchestrator | 
2026-02-28 00:56:23.213282 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-28 00:56:23.213293 | orchestrator | Saturday 28 February 2026 00:53:28 +0000 (0:00:00.314) 0:04:26.010 ***** 2026-02-28 00:56:23.213300 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.213306 | orchestrator | 2026-02-28 00:56:23.213312 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-28 00:56:23.213318 | orchestrator | Saturday 28 February 2026 00:53:30 +0000 (0:00:01.323) 0:04:27.334 ***** 2026-02-28 00:56:23.213330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 00:56:23.213343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 
00:56:23.213367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-28 00:56:23.213374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213392 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 00:56:23.213399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.213456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213469 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-28 00:56:23.213480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 
00:56:23.213500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.213522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.213529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 00:56:23.213617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.213631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}}) 
 2026-02-28 00:56:23.213705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.213726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.213740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.213775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.213798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.213943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.213954 | orchestrator | 2026-02-28 00:56:23.213961 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-28 00:56:23.213967 | orchestrator | Saturday 28 February 2026 00:53:34 +0000 (0:00:03.980) 0:04:31.314 ***** 2026-02-28 00:56:23.213978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 00:56:23.213986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.213999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-28 00:56:23.214115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 00:56:23.214140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2026-02-28 00:56:23.214147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.214230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-28 00:56:23.214255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214300 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 00:56:23.214380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.214490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.214517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.214523 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.214627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-28 00:56:23.214658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214747 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.214766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.214794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.214802 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.214809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-28 00:56:23.214836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.214843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-28 00:56:23.214893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-28 00:56:23.214901 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.214908 | orchestrator | 
2026-02-28 00:56:23.214914 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-28 00:56:23.214921 | orchestrator | Saturday 28 February 2026 00:53:35 +0000 (0:00:01.334) 0:04:32.649 ***** 2026-02-28 00:56:23.214928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:23.214935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:23.214941 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.214947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:23.214958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:23.214965 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.214971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:23.214977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-28 00:56:23.214989 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.214996 | orchestrator | 2026-02-28 00:56:23.215002 | orchestrator | TASK [proxysql-config : Copying over neutron 
ProxySQL users config] ************ 2026-02-28 00:56:23.215011 | orchestrator | Saturday 28 February 2026 00:53:37 +0000 (0:00:01.780) 0:04:34.429 ***** 2026-02-28 00:56:23.215021 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.215031 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.215041 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.215050 | orchestrator | 2026-02-28 00:56:23.215060 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-28 00:56:23.215071 | orchestrator | Saturday 28 February 2026 00:53:38 +0000 (0:00:01.226) 0:04:35.656 ***** 2026-02-28 00:56:23.215081 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.215091 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.215097 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.215103 | orchestrator | 2026-02-28 00:56:23.215113 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-28 00:56:23.215121 | orchestrator | Saturday 28 February 2026 00:53:40 +0000 (0:00:01.871) 0:04:37.527 ***** 2026-02-28 00:56:23.215127 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.215134 | orchestrator | 2026-02-28 00:56:23.215140 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-28 00:56:23.215146 | orchestrator | Saturday 28 February 2026 00:53:41 +0000 (0:00:01.124) 0:04:38.651 ***** 2026-02-28 00:56:23.215153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.215181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.215192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.215205 | orchestrator | 2026-02-28 00:56:23.215211 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-28 00:56:23.215218 | orchestrator | Saturday 28 February 2026 00:53:44 +0000 (0:00:03.125) 0:04:41.777 ***** 2026-02-28 00:56:23.215224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.215231 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.215238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.215245 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.215267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.215274 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.215280 | orchestrator | 2026-02-28 00:56:23.215287 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-28 00:56:23.215293 | orchestrator | Saturday 28 February 2026 00:53:45 
+0000 (0:00:00.509) 0:04:42.286 ***** 2026-02-28 00:56:23.215300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:23.215311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:23.215321 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.215329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:23.215339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:23.215346 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.215352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:23.215359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-28 00:56:23.215367 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.215374 | orchestrator | 2026-02-28 00:56:23.215382 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-28 00:56:23.215389 | orchestrator | 
Saturday 28 February 2026 00:53:45 +0000 (0:00:00.839) 0:04:43.126 ***** 2026-02-28 00:56:23.215396 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.215403 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.215410 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.215417 | orchestrator | 2026-02-28 00:56:23.215425 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-28 00:56:23.215432 | orchestrator | Saturday 28 February 2026 00:53:48 +0000 (0:00:02.163) 0:04:45.290 ***** 2026-02-28 00:56:23.215440 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.215447 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.215454 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.215461 | orchestrator | 2026-02-28 00:56:23.215468 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-28 00:56:23.215476 | orchestrator | Saturday 28 February 2026 00:53:50 +0000 (0:00:01.934) 0:04:47.225 ***** 2026-02-28 00:56:23.215483 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.215491 | orchestrator | 2026-02-28 00:56:23.215498 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-28 00:56:23.215505 | orchestrator | Saturday 28 February 2026 00:53:51 +0000 (0:00:01.358) 0:04:48.583 ***** 2026-02-28 00:56:23.215514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.215698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2026-02-28 00:56:23.215729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.215807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215820 | orchestrator | 2026-02-28 00:56:23.215827 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-28 00:56:23.215834 | orchestrator | Saturday 28 February 2026 00:53:55 +0000 (0:00:03.884) 0:04:52.468 ***** 2026-02-28 00:56:23.215841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.215848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215884 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.215894 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.215902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215915 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.215922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.215949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.215967 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.215973 | orchestrator | 2026-02-28 00:56:23.215980 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-28 00:56:23.215986 | orchestrator | Saturday 28 February 2026 00:53:56 +0000 (0:00:00.987) 0:04:53.455 ***** 2026-02-28 00:56:23.215993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216019 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216042 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.216048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216079 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.216085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-28 00:56:23.216112 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.216118 | orchestrator | 2026-02-28 00:56:23.216141 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-28 00:56:23.216148 | orchestrator | Saturday 28 February 2026 00:53:57 +0000 (0:00:00.824) 0:04:54.279 ***** 2026-02-28 00:56:23.216155 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.216161 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.216167 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.216174 | orchestrator | 2026-02-28 00:56:23.216180 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-28 00:56:23.216186 | orchestrator | Saturday 28 February 2026 00:53:58 +0000 (0:00:01.264) 0:04:55.544 ***** 2026-02-28 00:56:23.216193 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.216199 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.216205 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.216212 | orchestrator | 2026-02-28 00:56:23.216218 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-28 00:56:23.216224 | orchestrator | Saturday 28 February 2026 00:54:00 +0000 (0:00:01.955) 0:04:57.500 ***** 2026-02-28 00:56:23.216231 | 
orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.216237 | orchestrator | 2026-02-28 00:56:23.216243 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-28 00:56:23.216250 | orchestrator | Saturday 28 February 2026 00:54:01 +0000 (0:00:01.431) 0:04:58.931 ***** 2026-02-28 00:56:23.216260 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-28 00:56:23.216266 | orchestrator | 2026-02-28 00:56:23.216273 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-28 00:56:23.216279 | orchestrator | Saturday 28 February 2026 00:54:02 +0000 (0:00:00.824) 0:04:59.755 ***** 2026-02-28 00:56:23.216286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-28 00:56:23.216293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-28 00:56:23.216304 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-28 00:56:23.216311 | orchestrator | 2026-02-28 00:56:23.216317 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-28 00:56:23.216325 | orchestrator | Saturday 28 February 2026 00:54:06 +0000 (0:00:04.163) 0:05:03.919 ***** 2026-02-28 00:56:23.216331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:23.216338 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.216345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:23.216351 | orchestrator | skipping: 
[testbed-node-1] 2026-02-28 00:56:23.216374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:23.216381 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.216387 | orchestrator | 2026-02-28 00:56:23.216394 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-28 00:56:23.216400 | orchestrator | Saturday 28 February 2026 00:54:07 +0000 (0:00:01.022) 0:05:04.941 ***** 2026-02-28 00:56:23.216407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:23.216417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:23.216424 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.216431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:23.216442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:23.216449 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.216455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:23.216461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-28 00:56:23.216468 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.216474 | orchestrator | 2026-02-28 00:56:23.216481 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-28 00:56:23.216487 | orchestrator | Saturday 28 February 2026 00:54:09 +0000 (0:00:01.490) 0:05:06.432 ***** 2026-02-28 00:56:23.216493 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.216500 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.216506 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.216512 | orchestrator | 2026-02-28 00:56:23.216519 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-28 00:56:23.216525 | orchestrator | Saturday 28 February 2026 00:54:11 +0000 (0:00:02.423) 0:05:08.855 ***** 2026-02-28 00:56:23.216531 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.216537 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.216563 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.216574 | orchestrator | 2026-02-28 00:56:23.216584 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-28 
00:56:23.216592 | orchestrator | Saturday 28 February 2026 00:54:14 +0000 (0:00:02.777) 0:05:11.633 ***** 2026-02-28 00:56:23.216599 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-28 00:56:23.216605 | orchestrator | 2026-02-28 00:56:23.216611 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-28 00:56:23.216617 | orchestrator | Saturday 28 February 2026 00:54:15 +0000 (0:00:01.154) 0:05:12.788 ***** 2026-02-28 00:56:23.216624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:23.216631 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.216655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:23.216663 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.216669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:23.216681 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.216688 | orchestrator | 2026-02-28 00:56:23.216698 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-28 00:56:23.216704 | orchestrator | Saturday 28 February 2026 00:54:16 +0000 (0:00:01.143) 0:05:13.931 ***** 2026-02-28 00:56:23.216711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:23.216717 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.216724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2026-02-28 00:56:23.216730 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.216737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-28 00:56:23.216743 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.216749 | orchestrator | 2026-02-28 00:56:23.216756 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-28 00:56:23.216762 | orchestrator | Saturday 28 February 2026 00:54:18 +0000 (0:00:01.330) 0:05:15.262 ***** 2026-02-28 00:56:23.216768 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.216775 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.216781 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.216787 | orchestrator | 2026-02-28 00:56:23.216793 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-28 00:56:23.216800 | orchestrator | Saturday 28 February 2026 00:54:20 +0000 (0:00:02.056) 0:05:17.319 ***** 2026-02-28 00:56:23.216806 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.216813 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.216819 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.216825 | orchestrator | 2026-02-28 00:56:23.216832 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-28 00:56:23.216838 | orchestrator | Saturday 28 February 2026 00:54:22 +0000 (0:00:02.513) 0:05:19.832 
***** 2026-02-28 00:56:23.216844 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.216851 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.216857 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.216863 | orchestrator | 2026-02-28 00:56:23.216869 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-28 00:56:23.216883 | orchestrator | Saturday 28 February 2026 00:54:25 +0000 (0:00:02.695) 0:05:22.528 ***** 2026-02-28 00:56:23.216889 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-28 00:56:23.216896 | orchestrator | 2026-02-28 00:56:23.216902 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-28 00:56:23.216908 | orchestrator | Saturday 28 February 2026 00:54:26 +0000 (0:00:00.820) 0:05:23.348 ***** 2026-02-28 00:56:23.216930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-28 00:56:23.216938 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.216948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-28 00:56:23.216955 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.216961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-28 00:56:23.216968 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.216974 | orchestrator | 2026-02-28 00:56:23.216981 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-28 00:56:23.216987 | orchestrator | Saturday 28 February 2026 00:54:27 +0000 (0:00:01.188) 0:05:24.537 ***** 2026-02-28 00:56:23.216994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-28 00:56:23.217000 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.217007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': 
False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-28 00:56:23.217013 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.217019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-28 00:56:23.217030 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.217037 | orchestrator | 2026-02-28 00:56:23.217043 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-28 00:56:23.217049 | orchestrator | Saturday 28 February 2026 00:54:28 +0000 (0:00:01.429) 0:05:25.966 ***** 2026-02-28 00:56:23.217056 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.217062 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.217068 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.217074 | orchestrator | 2026-02-28 00:56:23.217081 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-28 00:56:23.217087 | orchestrator | Saturday 28 February 2026 00:54:29 +0000 (0:00:01.178) 0:05:27.144 ***** 2026-02-28 00:56:23.217093 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.217114 | 
orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.217121 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.217127 | orchestrator | 2026-02-28 00:56:23.217134 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-28 00:56:23.217140 | orchestrator | Saturday 28 February 2026 00:54:32 +0000 (0:00:02.546) 0:05:29.690 ***** 2026-02-28 00:56:23.217146 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.217153 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.217159 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.217165 | orchestrator | 2026-02-28 00:56:23.217172 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-28 00:56:23.217178 | orchestrator | Saturday 28 February 2026 00:54:35 +0000 (0:00:03.049) 0:05:32.740 ***** 2026-02-28 00:56:23.217184 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.217191 | orchestrator | 2026-02-28 00:56:23.217197 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-28 00:56:23.217203 | orchestrator | Saturday 28 February 2026 00:54:37 +0000 (0:00:01.691) 0:05:34.431 ***** 2026-02-28 00:56:23.217214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.217221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 00:56:23.217228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.217268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.217279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 00:56:23.217286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.217311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.217332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 00:56:23.217340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.217364 | orchestrator | 2026-02-28 00:56:23.217370 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-28 00:56:23.217376 | orchestrator | Saturday 28 February 2026 00:54:40 +0000 (0:00:03.582) 0:05:38.014 ***** 2026-02-28 00:56:23.217383 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.217389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 00:56:23.217467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.217497 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.217509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.217516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 00:56:23.217523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.217605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 00:56:23.217623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.217630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217637 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.217644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 00:56:23.217666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 00:56:23.217674 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.217680 | orchestrator | 2026-02-28 00:56:23.217691 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-28 00:56:23.217697 | orchestrator | Saturday 28 February 2026 00:54:41 +0000 (0:00:00.675) 0:05:38.689 ***** 2026-02-28 00:56:23.217704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:56:23.217710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:56:23.217717 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.217724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:56:23.217733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:56:23.217745 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.217751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:56:23.217757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-28 00:56:23.217764 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.217770 | orchestrator | 2026-02-28 00:56:23.217776 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-28 00:56:23.217783 | orchestrator | Saturday 28 February 2026 00:54:43 +0000 (0:00:01.506) 0:05:40.195 ***** 2026-02-28 00:56:23.217789 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.217795 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.217802 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.217808 | orchestrator | 2026-02-28 00:56:23.217814 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-28 00:56:23.217820 | orchestrator | Saturday 28 February 2026 00:54:44 +0000 (0:00:01.609) 0:05:41.805 ***** 2026-02-28 00:56:23.217826 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.217833 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.217839 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.217845 | orchestrator | 2026-02-28 00:56:23.217851 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-28 00:56:23.217858 | orchestrator | Saturday 28 February 2026 00:54:46 +0000 (0:00:02.290) 0:05:44.095 ***** 2026-02-28 00:56:23.217864 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.217870 | orchestrator | 2026-02-28 00:56:23.217876 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2026-02-28 00:56:23.217882 | orchestrator | Saturday 28 February 2026 00:54:48 +0000 (0:00:01.922) 0:05:46.018 ***** 2026-02-28 00:56:23.217890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:56:23.217912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:56:23.217923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:56:23.217936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:56:23.217944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:56:23.217966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-28 00:56:23.217974 | orchestrator | 2026-02-28 00:56:23.217980 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-28 00:56:23.217991 | orchestrator | Saturday 28 February 2026 00:54:54 +0000 (0:00:05.533) 0:05:51.551 ***** 2026-02-28 00:56:23.218001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:56:23.218008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:56:23.218039 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.218045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:56:23.218053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:56:23.218081 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.218089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:56:23.218100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:56:23.218108 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.218114 | orchestrator | 2026-02-28 00:56:23.218121 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-28 00:56:23.218127 | orchestrator | Saturday 28 February 2026 00:54:55 +0000 (0:00:00.704) 0:05:52.255 ***** 2026-02-28 00:56:23.218133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-28 00:56:23.218140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:23.218147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:23.218154 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.218160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-28 00:56:23.218167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:23.218173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:23.218180 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.218186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-28 00:56:23.218197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:23.218218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-28 00:56:23.218226 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.218232 | orchestrator | 2026-02-28 00:56:23.218239 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-28 00:56:23.218245 | orchestrator | Saturday 28 February 2026 00:54:56 +0000 (0:00:01.206) 0:05:53.462 ***** 2026-02-28 00:56:23.218251 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.218258 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.218264 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.218270 | orchestrator | 2026-02-28 00:56:23.218277 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-28 
00:56:23.218283 | orchestrator | Saturday 28 February 2026 00:54:57 +0000 (0:00:00.911) 0:05:54.373 ***** 2026-02-28 00:56:23.218289 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.218296 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.218302 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.218308 | orchestrator | 2026-02-28 00:56:23.218315 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-28 00:56:23.218321 | orchestrator | Saturday 28 February 2026 00:54:58 +0000 (0:00:01.522) 0:05:55.896 ***** 2026-02-28 00:56:23.218343 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.218350 | orchestrator | 2026-02-28 00:56:23.218356 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-28 00:56:23.218363 | orchestrator | Saturday 28 February 2026 00:55:00 +0000 (0:00:01.501) 0:05:57.397 ***** 2026-02-28 00:56:23.218370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 00:56:23.218377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:23.218383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-02-28 00:56:23.218423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 00:56:23.218434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:23.218441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 00:56:23.218454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:23.218473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:56:23.218493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-02-28 00:56:23.218518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 00:56:23.218525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:23.218538 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 00:56:23.218577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:23.218584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:23.218601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:23.218627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 00:56:23.218637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:23.218644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:23.218670 | orchestrator | 2026-02-28 00:56:23.218677 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-28 00:56:23.218684 | orchestrator | Saturday 28 February 2026 00:55:05 +0000 (0:00:05.316) 0:06:02.714 ***** 2026-02-28 00:56:23.218694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-28 00:56:23.218701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:23.218710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:56:23.218735 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-28 00:56:23.218742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:23.218752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:23.218776 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.218783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-28 00:56:23.218793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:23.218800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218817 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-28 00:56:23.218826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:56:23.218833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 00:56:23.218846 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-28 00:56:23.218853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:23.218870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 00:56:23.218897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-28 00:56:23.218911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:23.218918 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.218928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-28 00:56:23.218935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 00:56:23.218955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 00:56:23.218962 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.218969 | orchestrator | 2026-02-28 00:56:23.218976 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-28 00:56:23.218982 | orchestrator | Saturday 28 February 2026 00:55:06 +0000 (0:00:01.033) 0:06:03.747 ***** 2026-02-28 00:56:23.218989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-28 00:56:23.218996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-28 00:56:23.219003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:23.219011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:23.219017 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-28 00:56:23.219030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-28 00:56:23.219037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:23.219043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:23.219050 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-28 00:56:23.219066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-28 00:56:23.219072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:23.219079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-28 00:56:23.219090 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219096 | orchestrator | 2026-02-28 00:56:23.219106 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-28 00:56:23.219113 | orchestrator | Saturday 28 February 2026 00:55:07 +0000 (0:00:01.165) 0:06:04.913 ***** 2026-02-28 00:56:23.219120 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219126 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219132 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219139 | orchestrator | 2026-02-28 00:56:23.219145 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-28 00:56:23.219151 | orchestrator | Saturday 28 February 2026 00:55:08 +0000 (0:00:00.542) 0:06:05.455 ***** 2026-02-28 00:56:23.219158 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219164 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219170 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219177 | orchestrator | 2026-02-28 00:56:23.219183 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-28 
00:56:23.219189 | orchestrator | Saturday 28 February 2026 00:55:10 +0000 (0:00:01.741) 0:06:07.196 ***** 2026-02-28 00:56:23.219196 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.219202 | orchestrator | 2026-02-28 00:56:23.219208 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-28 00:56:23.219215 | orchestrator | Saturday 28 February 2026 00:55:11 +0000 (0:00:01.859) 0:06:09.056 ***** 2026-02-28 00:56:23.219221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:56:23.219229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:56:23.219239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-28 00:56:23.219251 | orchestrator | 2026-02-28 00:56:23.219258 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-28 00:56:23.219264 | orchestrator | Saturday 28 February 2026 00:55:14 +0000 (0:00:02.718) 0:06:11.774 ***** 2026-02-28 00:56:23.219274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 
'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:56:23.219281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:56:23.219288 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219294 | orchestrator | skipping: [testbed-node-1] 2026-02-28 
00:56:23.219301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-28 00:56:23.219312 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219318 | orchestrator | 2026-02-28 00:56:23.219325 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-28 00:56:23.219334 | orchestrator | Saturday 28 February 2026 00:55:15 +0000 (0:00:00.837) 0:06:12.612 ***** 2026-02-28 00:56:23.219341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-28 00:56:23.219348 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-28 00:56:23.219361 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-28 00:56:23.219373 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219380 | orchestrator | 2026-02-28 00:56:23.219386 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-28 00:56:23.219392 | orchestrator | Saturday 28 February 2026 00:55:16 +0000 (0:00:00.705) 0:06:13.317 ***** 2026-02-28 00:56:23.219399 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219405 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219411 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219418 | orchestrator | 2026-02-28 00:56:23.219424 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-28 00:56:23.219433 | orchestrator | Saturday 28 February 2026 00:55:16 +0000 (0:00:00.512) 0:06:13.830 ***** 2026-02-28 00:56:23.219440 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219447 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219453 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219459 | orchestrator | 2026-02-28 00:56:23.219466 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-28 00:56:23.219472 | orchestrator | Saturday 28 February 2026 00:55:18 +0000 (0:00:01.524) 0:06:15.355 ***** 2026-02-28 00:56:23.219479 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:56:23.219485 | orchestrator | 2026-02-28 00:56:23.219491 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-28 00:56:23.219498 | orchestrator | Saturday 28 February 2026 00:55:20 +0000 (0:00:02.033) 0:06:17.388 ***** 2026-02-28 00:56:23.219504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.219512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.219526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 
'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.219537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.219563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.219574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-28 00:56:23.219587 | orchestrator | 2026-02-28 00:56:23.219594 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-28 00:56:23.219600 | orchestrator | Saturday 28 February 2026 00:55:27 +0000 (0:00:06.974) 0:06:24.363 ***** 2026-02-28 00:56:23.219610 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.219617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.219627 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219634 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.219640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.219653 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219659 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.219669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-28 00:56:23.219676 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219682 | orchestrator | 2026-02-28 00:56:23.219689 | orchestrator 
| TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-28 00:56:23.219695 | orchestrator | Saturday 28 February 2026 00:55:27 +0000 (0:00:00.715) 0:06:25.079 ***** 2026-02-28 00:56:23.219704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219730 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219767 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-28 00:56:23.219799 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219805 | orchestrator | 2026-02-28 00:56:23.219811 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-28 00:56:23.219818 | orchestrator | Saturday 28 February 2026 00:55:29 +0000 (0:00:01.892) 0:06:26.971 ***** 2026-02-28 00:56:23.219824 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.219830 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.219836 | orchestrator | 
changed: [testbed-node-2] 2026-02-28 00:56:23.219843 | orchestrator | 2026-02-28 00:56:23.219849 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-28 00:56:23.219858 | orchestrator | Saturday 28 February 2026 00:55:31 +0000 (0:00:01.494) 0:06:28.466 ***** 2026-02-28 00:56:23.219865 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.219871 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.219877 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.219884 | orchestrator | 2026-02-28 00:56:23.219890 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-28 00:56:23.219896 | orchestrator | Saturday 28 February 2026 00:55:33 +0000 (0:00:02.461) 0:06:30.927 ***** 2026-02-28 00:56:23.219903 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219909 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219915 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219921 | orchestrator | 2026-02-28 00:56:23.219928 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-28 00:56:23.219934 | orchestrator | Saturday 28 February 2026 00:55:34 +0000 (0:00:00.370) 0:06:31.297 ***** 2026-02-28 00:56:23.219940 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219946 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219953 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.219959 | orchestrator | 2026-02-28 00:56:23.219965 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-28 00:56:23.219971 | orchestrator | Saturday 28 February 2026 00:55:34 +0000 (0:00:00.382) 0:06:31.680 ***** 2026-02-28 00:56:23.219977 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.219984 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.219990 | orchestrator | 
skipping: [testbed-node-2] 2026-02-28 00:56:23.219996 | orchestrator | 2026-02-28 00:56:23.220005 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-28 00:56:23.220016 | orchestrator | Saturday 28 February 2026 00:55:35 +0000 (0:00:00.718) 0:06:32.399 ***** 2026-02-28 00:56:23.220022 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.220028 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.220035 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.220041 | orchestrator | 2026-02-28 00:56:23.220047 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-28 00:56:23.220053 | orchestrator | Saturday 28 February 2026 00:55:35 +0000 (0:00:00.376) 0:06:32.775 ***** 2026-02-28 00:56:23.220060 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.220066 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.220072 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.220078 | orchestrator | 2026-02-28 00:56:23.220085 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-28 00:56:23.220091 | orchestrator | Saturday 28 February 2026 00:55:36 +0000 (0:00:00.446) 0:06:33.222 ***** 2026-02-28 00:56:23.220097 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.220103 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.220109 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.220116 | orchestrator | 2026-02-28 00:56:23.220122 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-28 00:56:23.220128 | orchestrator | Saturday 28 February 2026 00:55:37 +0000 (0:00:00.950) 0:06:34.173 ***** 2026-02-28 00:56:23.220134 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.220141 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.220147 | orchestrator | ok: 
[testbed-node-2] 2026-02-28 00:56:23.220153 | orchestrator | 2026-02-28 00:56:23.220159 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-28 00:56:23.220166 | orchestrator | Saturday 28 February 2026 00:55:37 +0000 (0:00:00.761) 0:06:34.934 ***** 2026-02-28 00:56:23.220172 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.220178 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.220184 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.220191 | orchestrator | 2026-02-28 00:56:23.220197 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-28 00:56:23.220203 | orchestrator | Saturday 28 February 2026 00:55:38 +0000 (0:00:00.383) 0:06:35.318 ***** 2026-02-28 00:56:23.220210 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.220216 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.220222 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.220228 | orchestrator | 2026-02-28 00:56:23.220234 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-28 00:56:23.220241 | orchestrator | Saturday 28 February 2026 00:55:39 +0000 (0:00:00.988) 0:06:36.306 ***** 2026-02-28 00:56:23.220247 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.220253 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.220260 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.220266 | orchestrator | 2026-02-28 00:56:23.220272 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-28 00:56:23.220279 | orchestrator | Saturday 28 February 2026 00:55:40 +0000 (0:00:01.433) 0:06:37.739 ***** 2026-02-28 00:56:23.220285 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.220291 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.220297 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.220304 | orchestrator | 2026-02-28 
00:56:23.220310 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-28 00:56:23.220317 | orchestrator | Saturday 28 February 2026 00:55:41 +0000 (0:00:00.990) 0:06:38.729 ***** 2026-02-28 00:56:23.220323 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.220329 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.220335 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.220342 | orchestrator | 2026-02-28 00:56:23.220348 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-28 00:56:23.220354 | orchestrator | Saturday 28 February 2026 00:55:51 +0000 (0:00:10.116) 0:06:48.846 ***** 2026-02-28 00:56:23.220364 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.220371 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.220377 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.220383 | orchestrator | 2026-02-28 00:56:23.220389 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-28 00:56:23.220395 | orchestrator | Saturday 28 February 2026 00:55:52 +0000 (0:00:00.799) 0:06:49.646 ***** 2026-02-28 00:56:23.220402 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.220408 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.220414 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.220420 | orchestrator | 2026-02-28 00:56:23.220427 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-28 00:56:23.220433 | orchestrator | Saturday 28 February 2026 00:56:03 +0000 (0:00:11.282) 0:07:00.929 ***** 2026-02-28 00:56:23.220439 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.220449 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.220456 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.220462 | orchestrator | 2026-02-28 00:56:23.220468 | orchestrator | RUNNING 
HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-28 00:56:23.220474 | orchestrator | Saturday 28 February 2026 00:56:07 +0000 (0:00:03.809) 0:07:04.739 ***** 2026-02-28 00:56:23.220480 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:56:23.220487 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:56:23.220493 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:56:23.220499 | orchestrator | 2026-02-28 00:56:23.220505 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-28 00:56:23.220512 | orchestrator | Saturday 28 February 2026 00:56:12 +0000 (0:00:04.984) 0:07:09.723 ***** 2026-02-28 00:56:23.220518 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.220524 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.220530 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.220537 | orchestrator | 2026-02-28 00:56:23.220560 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-28 00:56:23.220567 | orchestrator | Saturday 28 February 2026 00:56:12 +0000 (0:00:00.414) 0:07:10.137 ***** 2026-02-28 00:56:23.220573 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.220579 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.220586 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.220592 | orchestrator | 2026-02-28 00:56:23.220598 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-28 00:56:23.220608 | orchestrator | Saturday 28 February 2026 00:56:13 +0000 (0:00:00.750) 0:07:10.887 ***** 2026-02-28 00:56:23.220614 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.220620 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.220626 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.220633 | orchestrator | 2026-02-28 00:56:23.220639 | orchestrator | RUNNING 
HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-28 00:56:23.220645 | orchestrator | Saturday 28 February 2026 00:56:14 +0000 (0:00:00.428) 0:07:11.316 ***** 2026-02-28 00:56:23.220651 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.220658 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.220664 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.220670 | orchestrator | 2026-02-28 00:56:23.220676 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-28 00:56:23.220682 | orchestrator | Saturday 28 February 2026 00:56:14 +0000 (0:00:00.399) 0:07:11.715 ***** 2026-02-28 00:56:23.220689 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.220695 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.220701 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.220707 | orchestrator | 2026-02-28 00:56:23.220714 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-28 00:56:23.220720 | orchestrator | Saturday 28 February 2026 00:56:14 +0000 (0:00:00.382) 0:07:12.098 ***** 2026-02-28 00:56:23.220726 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:56:23.220737 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:56:23.220743 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:56:23.220749 | orchestrator | 2026-02-28 00:56:23.220756 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-28 00:56:23.220762 | orchestrator | Saturday 28 February 2026 00:56:15 +0000 (0:00:00.438) 0:07:12.537 ***** 2026-02-28 00:56:23.220769 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.220775 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.220781 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.220787 | orchestrator | 2026-02-28 00:56:23.220793 | orchestrator | RUNNING HANDLER [loadbalancer 
: Wait for proxysql to listen on VIP] ************ 2026-02-28 00:56:23.220800 | orchestrator | Saturday 28 February 2026 00:56:20 +0000 (0:00:05.260) 0:07:17.798 ***** 2026-02-28 00:56:23.220806 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:56:23.220812 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:56:23.220818 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:56:23.220824 | orchestrator | 2026-02-28 00:56:23.220831 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:56:23.220837 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-28 00:56:23.220844 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-28 00:56:23.220850 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-28 00:56:23.220856 | orchestrator | 2026-02-28 00:56:23.220863 | orchestrator | 2026-02-28 00:56:23.220869 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:56:23.220875 | orchestrator | Saturday 28 February 2026 00:56:21 +0000 (0:00:00.886) 0:07:18.685 ***** 2026-02-28 00:56:23.220881 | orchestrator | =============================================================================== 2026-02-28 00:56:23.220887 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.28s 2026-02-28 00:56:23.220894 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.12s 2026-02-28 00:56:23.220900 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 7.59s 2026-02-28 00:56:23.220906 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 7.46s 2026-02-28 00:56:23.220912 | orchestrator | haproxy-config : Copying over skyline haproxy config 
-------------------- 6.97s 2026-02-28 00:56:23.220918 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.46s 2026-02-28 00:56:23.220925 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.38s 2026-02-28 00:56:23.220931 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.29s 2026-02-28 00:56:23.220937 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.53s 2026-02-28 00:56:23.220948 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.32s 2026-02-28 00:56:23.220954 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.26s 2026-02-28 00:56:23.220960 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.21s 2026-02-28 00:56:23.220967 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 5.08s 2026-02-28 00:56:23.220973 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.98s 2026-02-28 00:56:23.220979 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.93s 2026-02-28 00:56:23.220985 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.91s 2026-02-28 00:56:23.220992 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.62s 2026-02-28 00:56:23.220998 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.43s 2026-02-28 00:56:23.221009 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.30s 2026-02-28 00:56:23.221015 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.30s 2026-02-28 00:56:26.272162 | orchestrator | 2026-02-28 00:56:26 | INFO  | Task 
be0375c1-f13a-415e-8551-d32eeb9be016 is in state STARTED
2026-02-28 00:56:26.277777 | orchestrator | 2026-02-28 00:56:26 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state STARTED
2026-02-28 00:56:26.280675 | orchestrator | 2026-02-28 00:56:26 | INFO  | Task 05b22c01-1353-490d-b31c-c12dfeb51265 is in state STARTED
2026-02-28 00:56:26.280926 | orchestrator | 2026-02-28 00:56:26 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:58:46.462707 | orchestrator | 2026-02-28 00:58:46 | INFO  | Task
be0375c1-f13a-415e-8551-d32eeb9be016 is in state SUCCESS 2026-02-28 00:58:46.464277 | orchestrator | 2026-02-28 00:58:46.464461 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-28 00:58:46.464479 | orchestrator | 2.16.14 2026-02-28 00:58:46.464493 | orchestrator | 2026-02-28 00:58:46.464503 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-28 00:58:46.464514 | orchestrator | 2026-02-28 00:58:46.464524 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-28 00:58:46.464535 | orchestrator | Saturday 28 February 2026 00:46:24 +0000 (0:00:00.746) 0:00:00.746 ***** 2026-02-28 00:58:46.464546 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.464557 | orchestrator | 2026-02-28 00:58:46.464568 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-28 00:58:46.464581 | orchestrator | Saturday 28 February 2026 00:46:25 +0000 (0:00:01.012) 0:00:01.758 ***** 2026-02-28 00:58:46.464598 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.464650 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.464671 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.464687 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.464702 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.464717 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.464733 | orchestrator | 2026-02-28 00:58:46.464748 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-28 00:58:46.464765 | orchestrator | Saturday 28 February 2026 00:46:27 +0000 (0:00:01.864) 0:00:03.623 ***** 2026-02-28 00:58:46.464782 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.464798 | orchestrator | ok: 
[testbed-node-4] 2026-02-28 00:58:46.464815 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.464826 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.464836 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.464846 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.464882 | orchestrator | 2026-02-28 00:58:46.464893 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-28 00:58:46.464909 | orchestrator | Saturday 28 February 2026 00:46:27 +0000 (0:00:00.791) 0:00:04.414 ***** 2026-02-28 00:58:46.464926 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.464942 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.464957 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.464972 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.464988 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.465004 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.465020 | orchestrator | 2026-02-28 00:58:46.465037 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-28 00:58:46.465052 | orchestrator | Saturday 28 February 2026 00:46:28 +0000 (0:00:00.958) 0:00:05.373 ***** 2026-02-28 00:58:46.465067 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.465082 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.465098 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.465114 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.465128 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.465145 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.465161 | orchestrator | 2026-02-28 00:58:46.465179 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-28 00:58:46.465198 | orchestrator | Saturday 28 February 2026 00:46:29 +0000 (0:00:00.783) 0:00:06.156 ***** 2026-02-28 00:58:46.465214 | orchestrator | ok: [testbed-node-3] 2026-02-28 
00:58:46.465231 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.465247 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.465264 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.465281 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.465298 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.465309 | orchestrator | 2026-02-28 00:58:46.465318 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-28 00:58:46.465329 | orchestrator | Saturday 28 February 2026 00:46:30 +0000 (0:00:00.699) 0:00:06.855 ***** 2026-02-28 00:58:46.465339 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.465348 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.465358 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.465368 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.465377 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.465387 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.465397 | orchestrator | 2026-02-28 00:58:46.465407 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-28 00:58:46.465418 | orchestrator | Saturday 28 February 2026 00:46:32 +0000 (0:00:02.164) 0:00:09.020 ***** 2026-02-28 00:58:46.465428 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.465438 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.465448 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.465457 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.465467 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.465477 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.465486 | orchestrator | 2026-02-28 00:58:46.465496 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-28 00:58:46.465506 | orchestrator | Saturday 28 February 2026 00:46:33 +0000 (0:00:01.167) 0:00:10.188 ***** 
2026-02-28 00:58:46.465516 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.465525 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.465535 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.465545 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:46.465554 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:46.465564 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:46.465640 | orchestrator | 
2026-02-28 00:58:46.465654 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-28 00:58:46.465670 | orchestrator | Saturday 28 February 2026 00:46:34 +0000 (0:00:00.870) 0:00:11.058 *****
2026-02-28 00:58:46.465686 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 00:58:46.465735 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 00:58:46.465754 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 00:58:46.465771 | orchestrator | 
2026-02-28 00:58:46.465788 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-28 00:58:46.465805 | orchestrator | Saturday 28 February 2026 00:46:35 +0000 (0:00:00.617) 0:00:11.676 *****
2026-02-28 00:58:46.465822 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.465839 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.465855 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.465897 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:46.465915 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:46.465932 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:46.465948 | orchestrator | 
2026-02-28 00:58:46.465965 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-28 00:58:46.466300 | orchestrator | Saturday 28 February 2026 00:46:36 +0000 (0:00:01.461) 0:00:13.137 *****
2026-02-28 00:58:46.466325 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 00:58:46.466335 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 00:58:46.466345 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 00:58:46.466355 | orchestrator | 
2026-02-28 00:58:46.466365 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-28 00:58:46.466375 | orchestrator | Saturday 28 February 2026 00:46:39 +0000 (0:00:03.232) 0:00:16.370 *****
2026-02-28 00:58:46.466385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 00:58:46.466396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-28 00:58:46.466406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-28 00:58:46.466416 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.466426 | orchestrator | 
2026-02-28 00:58:46.466436 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-28 00:58:46.466446 | orchestrator | Saturday 28 February 2026 00:46:40 +0000 (0:00:00.872) 0:00:17.242 *****
2026-02-28 00:58:46.466457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-28 00:58:46.466470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-28 00:58:46.466481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-28 00:58:46.466491 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.466501 | orchestrator | 
2026-02-28 00:58:46.466511 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-28 00:58:46.466521 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:01.350) 0:00:18.593 *****
2026-02-28 00:58:46.466532 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:46.466546 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:46.466567 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:46.466578 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.466588 | orchestrator | 
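The "Find a running mon container" task above runs `docker ps -q --filter name=ceph-mon-<hostname>` against each monitor host; the role then treats a host whose command printed a container ID as having a running mon (here every stdout was empty, so the container-based `running_mon` fact was skipped). A minimal sketch of that first-non-empty-stdout selection rule, using a hypothetical helper `find_running_mon` that is not part of ceph-ansible itself:

```python
def find_running_mon(ps_stdout_by_host):
    """Return the first host whose captured `docker ps -q` output is
    non-empty, i.e. the first host with a running ceph-mon container.

    ps_stdout_by_host maps hostname -> stdout of
    `docker ps -q --filter name=ceph-mon-<hostname>` (hypothetical
    pre-captured strings; no Docker calls are made here).
    """
    for host, stdout in ps_stdout_by_host.items():
        if stdout.strip():  # a container ID was printed
            return host
    return None  # no mon container running anywhere


# In the log above all three stdouts were empty, so nothing matched.
print(find_running_mon({
    "testbed-node-0": "",
    "testbed-node-1": "3f4c9b2a1d0e\n",  # hypothetical container ID
    "testbed-node-2": "",
}))  # prints testbed-node-1
```

The empty-stdout results still show up in the log as `skipping:` items because the registered command output is replayed verbatim in the loop over hosts.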
2026-02-28 00:58:46.466598 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-28 00:58:46.466695 | orchestrator | Saturday 28 February 2026 00:46:42 +0000 (0:00:00.752) 0:00:19.345 *****
2026-02-28 00:58:46.466730 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-28 00:46:37.414326', 'end': '2026-02-28 00:46:37.507756', 'delta': '0:00:00.093430', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-28 00:58:46.466803 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-28 00:46:38.726611', 'end': '2026-02-28 00:46:38.834339', 'delta': '0:00:00.107728', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-28 00:58:46.466826 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-28 00:46:39.709559', 'end': '2026-02-28 00:46:39.796084', 'delta': '0:00:00.086525', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-28 00:58:46.466845 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.466904 | orchestrator | 
2026-02-28 00:58:46.466917 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-28 00:58:46.466926 | orchestrator | Saturday 28 February 2026 00:46:43 +0000 (0:00:00.456) 0:00:19.802 *****
2026-02-28 00:58:46.466936 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.466946 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.466956 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.466966 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:46.466975 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:46.466985 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:46.466995 | orchestrator | 
2026-02-28 00:58:46.467005 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-28 00:58:46.467025 | orchestrator | Saturday 28 February 2026 00:46:46 +0000 (0:00:02.854) 0:00:22.656 *****
2026-02-28 00:58:46.467035 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:58:46.467045 | orchestrator | 
2026-02-28 00:58:46.467055 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-28 00:58:46.467064 | orchestrator | Saturday 28 February 2026 00:46:47 +0000 (0:00:00.965) 0:00:23.621 *****
2026-02-28 00:58:46.467336 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.467348 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.467358 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.467368 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.467378 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.467387 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.467397 | orchestrator | 
2026-02-28 00:58:46.467407 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-28 00:58:46.467417 | orchestrator | Saturday 28 February 2026 00:46:51 +0000 (0:00:04.692) 0:00:28.314 *****
2026-02-28 00:58:46.467426 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.467436 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.467446 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.467455 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.467465 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.467475 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.467484 | orchestrator | 
2026-02-28 00:58:46.467494 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-28 00:58:46.467504 | orchestrator | Saturday 28 February 2026 00:46:54 +0000 (0:00:02.728) 0:00:31.043 *****
2026-02-28 00:58:46.467514 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.467523 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.467533 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.467543 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.467553 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.467563 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.467572 | orchestrator | 
2026-02-28 00:58:46.467582 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-28 00:58:46.467592 | orchestrator | Saturday 28 February 2026 00:46:56 +0000 (0:00:02.350) 0:00:33.393 *****
2026-02-28 00:58:46.467602 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.467643 | orchestrator | 
2026-02-28 00:58:46.467664 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-28 00:58:46.467674 | orchestrator | Saturday 28 February 2026 00:46:57 +0000 (0:00:00.248) 0:00:33.642 *****
2026-02-28 00:58:46.467683 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.467693 | orchestrator | 
2026-02-28 00:58:46.467703 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-28 00:58:46.467713 | orchestrator | Saturday 28 February 2026 00:46:57 +0000 (0:00:00.422) 0:00:34.065 *****
2026-02-28 00:58:46.467784 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.467797 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.467807 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.467831 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.467849 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.467866 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.467883 | orchestrator | 
2026-02-28 00:58:46.467899 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-28 00:58:46.467915 | orchestrator | Saturday 28 February 2026 00:46:58 +0000 (0:00:00.988) 0:00:35.053 *****
2026-02-28 00:58:46.467931 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.467948 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.467966 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.467982 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.468000 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.468030 | orchestrator | skipping: [testbed-node-2]
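The fsid tasks above implement a simple precedence: reuse the fsid reported by an already-running cluster (`Get current fsid if cluster is already running` returned `ok` from the mon on testbed-node-0), and only fall back to `Generate cluster fsid` (skipped here) when none exists. A Ceph fsid is a plain UUID, so the generation step can be sketched as below; `pick_fsid` is a hypothetical helper illustrating the precedence, not ceph-ansible's actual code:

```python
import uuid


def generate_fsid():
    # A cluster fsid is just a random UUID (the fallback path that was
    # skipped in the log above because a current fsid already existed).
    return str(uuid.uuid4())


def pick_fsid(current_fsid):
    # Prefer the fsid of the running cluster; generate one only when
    # no cluster exists yet (hypothetical condensation of the tasks).
    return current_fsid if current_fsid else generate_fsid()


print(pick_fsid("4f0be998-9b52-4a2f-8d5e-0a1f0e2c7d11"))  # reused as-is
print(len(pick_fsid(None)))  # freshly generated, 36-char UUID string
```

Because the running cluster already had an fsid, every `Set_fact fsid`/`Generate cluster fsid` task in this run reports `skipping`.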
2026-02-28 00:58:46.468047 | orchestrator | 
2026-02-28 00:58:46.468331 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-28 00:58:46.468351 | orchestrator | Saturday 28 February 2026 00:46:59 +0000 (0:00:01.352) 0:00:36.406 *****
2026-02-28 00:58:46.468367 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.468386 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.468402 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.468419 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.468436 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.468454 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.468470 | orchestrator | 
2026-02-28 00:58:46.468486 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-28 00:58:46.468503 | orchestrator | Saturday 28 February 2026 00:47:00 +0000 (0:00:00.998) 0:00:37.404 *****
2026-02-28 00:58:46.468520 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.468537 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.468553 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.468569 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.468586 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.468670 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.468689 | orchestrator | 
2026-02-28 00:58:46.468746 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-28 00:58:46.468766 | orchestrator | Saturday 28 February 2026 00:47:01 +0000 (0:00:00.832) 0:00:38.237 *****
2026-02-28 00:58:46.468784 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.468800 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.468816 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.468833 | orchestrator | skipping: [testbed-node-0]
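The "Resolve device link(s)" / "build devices from resolved symlinks" tasks above canonicalise configured OSD device paths (e.g. `/dev/disk/by-id/...` aliases) to their real device nodes, so the same physical disk referenced by two names is only counted once. The resolution itself is ordinary symlink dereferencing; a minimal sketch using `os.path.realpath`, with a temp-dir symlink standing in for a `/dev` alias (all paths here are hypothetical stand-ins, and the real tasks were skipped in this run):

```python
import os
import tempfile

# Build a fake "/dev" layout: a regular file as the device node and a
# symlink as its by-id alias, then resolve the alias back to the node.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "sdb")          # stand-in for /dev/sdb
    open(target, "w").close()
    alias = os.path.join(d, "by-id-alias")   # stand-in for a by-id symlink
    os.symlink(target, alias)
    resolved = os.path.realpath(alias)       # dereferences the symlink
    print(resolved == os.path.realpath(target))  # True
```

De-duplicating the resolved list (e.g. `sorted(set(resolved_paths))`) is then enough to build a unique device list, which is roughly what the `Set_fact build devices from resolved symlinks` step does.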
2026-02-28 00:58:46.468850 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.468864 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.468881 | orchestrator | 
2026-02-28 00:58:46.468898 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-28 00:58:46.469181 | orchestrator | Saturday 28 February 2026 00:47:02 +0000 (0:00:00.728) 0:00:38.965 *****
2026-02-28 00:58:46.469200 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.469218 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.469235 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.469251 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.469268 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.469285 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.469303 | orchestrator | 
2026-02-28 00:58:46.469321 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-28 00:58:46.469339 | orchestrator | Saturday 28 February 2026 00:47:04 +0000 (0:00:02.357) 0:00:41.323 *****
2026-02-28 00:58:46.469355 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.469371 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.469388 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.469406 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.469423 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.469441 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.469457 | orchestrator | 
2026-02-28 00:58:46.469474 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-28 00:58:46.469492 | orchestrator | Saturday 28 February 2026 00:47:06 +0000 (0:00:01.441) 0:00:42.765 *****
2026-02-28 00:58:46.469511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e-osd--block--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e', 'dm-uuid-LVM-oEvVqsETkumcFmfvX36Aswue9YtL0Ei3ctP892bqoVgrwRbVQy3lHoCDaUo4Po0H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d8e79be--6c7a--5031--8b8d--1755de447a00-osd--block--4d8e79be--6c7a--5031--8b8d--1755de447a00', 'dm-uuid-LVM-NH1qV3EAURygPY7zLz8kOuc6LxLritDoFagI2LhLVeBWG1aSAHzbjJjK5kEppla2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e2365387--977d--5b6c--ac86--7516065bddb2-osd--block--e2365387--977d--5b6c--ac86--7516065bddb2', 'dm-uuid-LVM-LhuaNiqb1aaE0rrIXkJmdId6DTmzxYz3XAcZ1m8S7wRs0cGLbhdKMSdJMJpGp7FH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part1', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part14', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part15', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part16', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:46.469808 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e-osd--block--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0DlmcX-vyA1-FdZ4-rgBO-p0T7-jRuf-2G0Fm4', 'scsi-0QEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d', 'scsi-SQEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:46.469857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4d8e79be--6c7a--5031--8b8d--1755de447a00-osd--block--4d8e79be--6c7a--5031--8b8d--1755de447a00'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0lv7fw-TjZi-LVNE-0ofO-4ikh-qx6U-rJVolm', 'scsi-0QEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd', 'scsi-SQEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:46.469891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0', 'scsi-SQEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:46.469910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c221fe87--4514--5691--85ae--4cf2e32a6a79-osd--block--c221fe87--4514--5691--85ae--4cf2e32a6a79', 'dm-uuid-LVM-nco1HNB6DfIt66XyU5t0An12V8JIhY08K5rxDgsWq69tTojbp5MQly90yZNx9PcR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 00:58:46.469960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.469993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.470011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-28 00:58:46.470084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.470100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.470120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.470132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e9a8b5b--9130--5945--a817--2135e2f57de8-osd--block--4e9a8b5b--9130--5945--a817--2135e2f57de8', 'dm-uuid-LVM-XCQn1NXuiFygAu0FMb3HnncWfDliS40aFj9Jw2XHuSeYn6DkfwfnLsqCS3stU1fW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2026-02-28 00:58:46.470143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part1', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part14', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part15', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part16', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  [... repeated per-device skip items (dm-*, loop0-loop7, sda incl. partitions sda1/sda14/sda15/sda16, sdb, sdc, sdd, sr0) for testbed-node-0 through testbed-node-5 trimmed ...] 2026-02-28 00:58:46.470207 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.470267 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.470684 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.470721 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.470743 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.470877 |
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', ...})  2026-02-28 00:58:46.470888 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.470898 | orchestrator | 2026-02-28 00:58:46.470909 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-28 00:58:46.470919 | orchestrator | Saturday 28 February 2026 00:47:09 +0000 (0:00:02.845) 0:00:45.610 ***** [... repeated skip items ('skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool') for testbed-node-3 and testbed-node-4 devices trimmed ...] 2026-02-28 00:58:46.471020 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var':
'item'})  2026-02-28 00:58:46.471031 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d8e79be--6c7a--5031--8b8d--1755de447a00-osd--block--4d8e79be--6c7a--5031--8b8d--1755de447a00', 'dm-uuid-LVM-NH1qV3EAURygPY7zLz8kOuc6LxLritDoFagI2LhLVeBWG1aSAHzbjJjK5kEppla2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471041 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471051 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471067 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471084 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471095 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471111 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part1', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part14', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part15', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part16', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-28 00:58:46.471145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e2365387--977d--5b6c--ac86--7516065bddb2-osd--block--e2365387--977d--5b6c--ac86--7516065bddb2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LVx0r5-4jjO-slpB-U2Vw-w1fq-rRDr-0rBMlv', 'scsi-0QEMU_QEMU_HARDDISK_5ed1e25e-e858-43bf-b647-15f2d5789185', 'scsi-SQEMU_QEMU_HARDDISK_5ed1e25e-e858-43bf-b647-15f2d5789185'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c221fe87--4514--5691--85ae--4cf2e32a6a79-osd--block--c221fe87--4514--5691--85ae--4cf2e32a6a79'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kd3Qnk-expA-zOCH-MYLJ-G11h-yid9-r3LwJO', 'scsi-0QEMU_QEMU_HARDDISK_e0efcc73-9d13-408e-8d84-f67f704dc102', 'scsi-SQEMU_QEMU_HARDDISK_e0efcc73-9d13-408e-8d84-f67f704dc102'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471188 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dee7b1c7-019b-4aff-807a-ca0205e3afa9', 'scsi-SQEMU_QEMU_HARDDISK_dee7b1c7-019b-4aff-807a-ca0205e3afa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471202 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471222 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471260 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471289 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471304 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.471321 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471337 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471354 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e9a8b5b--9130--5945--a817--2135e2f57de8-osd--block--4e9a8b5b--9130--5945--a817--2135e2f57de8', 'dm-uuid-LVM-XCQn1NXuiFygAu0FMb3HnncWfDliS40aFj9Jw2XHuSeYn6DkfwfnLsqCS3stU1fW'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471371 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471405 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--160cc444--1ede--5c9f--8076--16a146e97f10-osd--block--160cc444--1ede--5c9f--8076--16a146e97f10', 'dm-uuid-LVM-BqbQ0yf6eu0XC1OVoMqEm5OgBM88FmsT5sbJbDh3Pd1We2bx9OYSm5g8PLfSa9mW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471438 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471456 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471491 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471502 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471516 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part1', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part14', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part15', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part16', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-28 00:58:46.471571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471581 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471602 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e-osd--block--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0DlmcX-vyA1-FdZ4-rgBO-p0T7-jRuf-2G0Fm4', 'scsi-0QEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d', 'scsi-SQEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471657 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part1', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part14', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part15', 
'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part16', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471670 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4d8e79be--6c7a--5031--8b8d--1755de447a00-osd--block--4d8e79be--6c7a--5031--8b8d--1755de447a00'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0lv7fw-TjZi-LVNE-0ofO-4ikh-qx6U-rJVolm', 'scsi-0QEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd', 'scsi-SQEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471691 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0', 'scsi-SQEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471708 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4e9a8b5b--9130--5945--a817--2135e2f57de8-osd--block--4e9a8b5b--9130--5945--a817--2135e2f57de8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DDF6Ml-BMxH-QAHp-FH9m-xT4N-KaRX-rJAGxo', 'scsi-0QEMU_QEMU_HARDDISK_d23b355c-9115-4a32-83d9-d27c9bfa2660', 'scsi-SQEMU_QEMU_HARDDISK_d23b355c-9115-4a32-83d9-d27c9bfa2660'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471718 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--160cc444--1ede--5c9f--8076--16a146e97f10-osd--block--160cc444--1ede--5c9f--8076--16a146e97f10'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eWaxZe-zQDk-vASa-bRYd-KRho-lQym-x8ZyHi', 'scsi-0QEMU_QEMU_HARDDISK_2f6d4770-6b80-415d-bad7-939321dd0d14', 'scsi-SQEMU_QEMU_HARDDISK_2f6d4770-6b80-415d-bad7-939321dd0d14'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471729 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74c1e4c5-3021-4968-89ab-b5ccd24df7f0', 'scsi-SQEMU_QEMU_HARDDISK_74c1e4c5-3021-4968-89ab-b5ccd24df7f0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471767 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471778 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471789 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471800 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471812 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471830 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471861 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-02-28 00:58:46.471899 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471916 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471932 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0524a285-b1bf-4737-b91a-ca6b10871a2b', 'scsi-SQEMU_QEMU_HARDDISK_0524a285-b1bf-4737-b91a-ca6b10871a2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0524a285-b1bf-4737-b91a-ca6b10871a2b-part1', 'scsi-SQEMU_QEMU_HARDDISK_0524a285-b1bf-4737-b91a-ca6b10871a2b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0524a285-b1bf-4737-b91a-ca6b10871a2b-part14', 'scsi-SQEMU_QEMU_HARDDISK_0524a285-b1bf-4737-b91a-ca6b10871a2b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0524a285-b1bf-4737-b91a-ca6b10871a2b-part15', 'scsi-SQEMU_QEMU_HARDDISK_0524a285-b1bf-4737-b91a-ca6b10871a2b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0524a285-b1bf-4737-b91a-ca6b10871a2b-part16', 'scsi-SQEMU_QEMU_HARDDISK_0524a285-b1bf-4737-b91a-ca6b10871a2b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-28 00:58:46.471974 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.471992 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.472056 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472075 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472107 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472128 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472147 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472184 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472213 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472231 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.472249 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.472267 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472285 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9d3095c-7509-4fbe-ae74-63c4ac873621', 'scsi-SQEMU_QEMU_HARDDISK_c9d3095c-7509-4fbe-ae74-63c4ac873621'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9d3095c-7509-4fbe-ae74-63c4ac873621-part1', 'scsi-SQEMU_QEMU_HARDDISK_c9d3095c-7509-4fbe-ae74-63c4ac873621-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9d3095c-7509-4fbe-ae74-63c4ac873621-part14', 'scsi-SQEMU_QEMU_HARDDISK_c9d3095c-7509-4fbe-ae74-63c4ac873621-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9d3095c-7509-4fbe-ae74-63c4ac873621-part15', 'scsi-SQEMU_QEMU_HARDDISK_c9d3095c-7509-4fbe-ae74-63c4ac873621-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_c9d3095c-7509-4fbe-ae74-63c4ac873621-part16', 'scsi-SQEMU_QEMU_HARDDISK_c9d3095c-7509-4fbe-ae74-63c4ac873621-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472333 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472352 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472370 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.472386 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472401 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472417 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472434 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472466 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472495 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472512 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 00:58:46.472531 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4377a96-07c8-49d6-8f0b-9a269b92cb14', 'scsi-SQEMU_QEMU_HARDDISK_c4377a96-07c8-49d6-8f0b-9a269b92cb14'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4377a96-07c8-49d6-8f0b-9a269b92cb14-part1', 'scsi-SQEMU_QEMU_HARDDISK_c4377a96-07c8-49d6-8f0b-9a269b92cb14-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4377a96-07c8-49d6-8f0b-9a269b92cb14-part14', 'scsi-SQEMU_QEMU_HARDDISK_c4377a96-07c8-49d6-8f0b-9a269b92cb14-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4377a96-07c8-49d6-8f0b-9a269b92cb14-part15', 'scsi-SQEMU_QEMU_HARDDISK_c4377a96-07c8-49d6-8f0b-9a269b92cb14-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4377a96-07c8-49d6-8f0b-9a269b92cb14-part16', 'scsi-SQEMU_QEMU_HARDDISK_c4377a96-07c8-49d6-8f0b-9a269b92cb14-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-28 00:58:46.472566 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-28 00:58:46.472584 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.472594 | orchestrator |
2026-02-28 00:58:46.472639 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-28 00:58:46.472651 | orchestrator | Saturday 28 February 2026 00:47:11 +0000 (0:00:02.762) 0:00:48.373 *****
2026-02-28 00:58:46.472662 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.472672 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.472682 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.472692 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:46.472702 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:46.472712 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:46.472722 | orchestrator |
2026-02-28 00:58:46.472732 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-28 00:58:46.472741 | orchestrator | Saturday 28 February 2026 00:47:14 +0000 (0:00:02.771) 0:00:51.145 *****
2026-02-28 00:58:46.472751 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.472761 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.472771 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.472781 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:46.472790 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:46.472800 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:46.472809 | orchestrator |
2026-02-28 00:58:46.472819 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-28 00:58:46.472829 | orchestrator | Saturday 28 February 2026 00:47:15 +0000 (0:00:01.053) 0:00:52.198 *****
2026-02-28 00:58:46.472839 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.472849 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.472859 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.472869 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.472878 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.472888 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.472898 | orchestrator |
2026-02-28 00:58:46.472907 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-28 00:58:46.472917 | orchestrator | Saturday 28 February 2026 00:47:17 +0000 (0:00:01.759) 0:00:53.958 *****
2026-02-28 00:58:46.472927 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.472937 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.472946 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.472956 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.472966 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.472982 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.472992 | orchestrator |
2026-02-28 00:58:46.473002 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-28 00:58:46.473012 | orchestrator | Saturday 28 February 2026 00:47:18 +0000 (0:00:01.076) 0:00:55.035 *****
2026-02-28 00:58:46.473022 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.473032 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.473041 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.473051 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.473060 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.473070 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.473080 | orchestrator |
2026-02-28 00:58:46.473089 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-28 00:58:46.473105 | orchestrator | Saturday 28 February 2026 00:47:20 +0000 (0:00:02.132) 0:00:57.168 *****
2026-02-28 00:58:46.473123 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.473139 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.473156 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.473173 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.473190 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.473207 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.473223 | orchestrator |
2026-02-28 00:58:46.473237 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-28 00:58:46.473247 | orchestrator | Saturday 28 February 2026 00:47:21 +0000 (0:00:00.892) 0:00:58.060 *****
2026-02-28 00:58:46.473257 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 00:58:46.473267 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-28 00:58:46.473277 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-28 00:58:46.473287 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-28 00:58:46.473297 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-28 00:58:46.473306 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-28 00:58:46.473316 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-28 00:58:46.473325 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-28 00:58:46.473335 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-28 00:58:46.473345 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-28 00:58:46.473354 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-28 00:58:46.473364 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-28 00:58:46.473373 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-28 00:58:46.473383 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-28 00:58:46.473393 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-28 00:58:46.473402 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-28 00:58:46.473412 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-28 00:58:46.473422 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-28 00:58:46.473432 | orchestrator |
2026-02-28 00:58:46.473441 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-28 00:58:46.473451 | orchestrator | Saturday 28 February 2026 00:47:26 +0000 (0:00:04.677) 0:01:02.737 *****
2026-02-28 00:58:46.473461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 00:58:46.473476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-28 00:58:46.473487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-28 00:58:46.473496 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-28 00:58:46.473505 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-28 00:58:46.473515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-28 00:58:46.473525 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.473535 | orchestrator | skipping: [testbed-node-5]
=> (item=testbed-node-0)  2026-02-28 00:58:46.473560 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-28 00:58:46.473570 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-28 00:58:46.473580 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.473589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-28 00:58:46.473599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-28 00:58:46.473754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-28 00:58:46.473801 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.473812 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-28 00:58:46.473822 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-28 00:58:46.473832 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-28 00:58:46.473842 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.473851 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.473861 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-28 00:58:46.473870 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-28 00:58:46.473880 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-28 00:58:46.473890 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.473900 | orchestrator | 2026-02-28 00:58:46.473910 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-28 00:58:46.473920 | orchestrator | Saturday 28 February 2026 00:47:27 +0000 (0:00:01.472) 0:01:04.210 ***** 2026-02-28 00:58:46.473930 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.473939 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.473949 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.473957 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-02-28 00:58:46.473964 | orchestrator | 2026-02-28 00:58:46.473971 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-28 00:58:46.473978 | orchestrator | Saturday 28 February 2026 00:47:29 +0000 (0:00:01.935) 0:01:06.146 ***** 2026-02-28 00:58:46.473985 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.473992 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.473999 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.474006 | orchestrator | 2026-02-28 00:58:46.474012 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-28 00:58:46.474203 | orchestrator | Saturday 28 February 2026 00:47:30 +0000 (0:00:00.556) 0:01:06.702 ***** 2026-02-28 00:58:46.474211 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.474218 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.474225 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.474232 | orchestrator | 2026-02-28 00:58:46.474239 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-28 00:58:46.474246 | orchestrator | Saturday 28 February 2026 00:47:30 +0000 (0:00:00.493) 0:01:07.196 ***** 2026-02-28 00:58:46.474252 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.474259 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.474266 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.474273 | orchestrator | 2026-02-28 00:58:46.474279 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-28 00:58:46.474286 | orchestrator | Saturday 28 February 2026 00:47:31 +0000 (0:00:00.665) 0:01:07.861 ***** 2026-02-28 00:58:46.474293 | orchestrator | 
ok: [testbed-node-3] 2026-02-28 00:58:46.474301 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.474308 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.474315 | orchestrator | 2026-02-28 00:58:46.474322 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-28 00:58:46.474329 | orchestrator | Saturday 28 February 2026 00:47:32 +0000 (0:00:00.740) 0:01:08.602 ***** 2026-02-28 00:58:46.474346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:58:46.474353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:58:46.474360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:58:46.474367 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.474374 | orchestrator | 2026-02-28 00:58:46.474381 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-28 00:58:46.474388 | orchestrator | Saturday 28 February 2026 00:47:32 +0000 (0:00:00.822) 0:01:09.425 ***** 2026-02-28 00:58:46.474395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:58:46.474402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:58:46.474408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:58:46.474415 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.474422 | orchestrator | 2026-02-28 00:58:46.474429 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-28 00:58:46.474436 | orchestrator | Saturday 28 February 2026 00:47:33 +0000 (0:00:00.551) 0:01:09.976 ***** 2026-02-28 00:58:46.474443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:58:46.474449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:58:46.474456 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-28 00:58:46.474463 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.474470 | orchestrator | 2026-02-28 00:58:46.474482 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-28 00:58:46.474489 | orchestrator | Saturday 28 February 2026 00:47:33 +0000 (0:00:00.439) 0:01:10.416 ***** 2026-02-28 00:58:46.474497 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.474503 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.474510 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.474517 | orchestrator | 2026-02-28 00:58:46.474524 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-28 00:58:46.474531 | orchestrator | Saturday 28 February 2026 00:47:34 +0000 (0:00:00.422) 0:01:10.838 ***** 2026-02-28 00:58:46.474537 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-28 00:58:46.474544 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-28 00:58:46.474567 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-28 00:58:46.474574 | orchestrator | 2026-02-28 00:58:46.474581 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-28 00:58:46.474588 | orchestrator | Saturday 28 February 2026 00:47:35 +0000 (0:00:00.928) 0:01:11.766 ***** 2026-02-28 00:58:46.474594 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 00:58:46.474602 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 00:58:46.474636 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 00:58:46.474650 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 00:58:46.474659 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-28 00:58:46.474666 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 00:58:46.474673 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-28 00:58:46.474680 | orchestrator | 2026-02-28 00:58:46.474686 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-28 00:58:46.474693 | orchestrator | Saturday 28 February 2026 00:47:36 +0000 (0:00:00.994) 0:01:12.760 ***** 2026-02-28 00:58:46.474700 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 00:58:46.474707 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 00:58:46.474713 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 00:58:46.474727 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 00:58:46.474734 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-28 00:58:46.474741 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 00:58:46.474748 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-28 00:58:46.474755 | orchestrator | 2026-02-28 00:58:46.474762 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 00:58:46.474769 | orchestrator | Saturday 28 February 2026 00:47:38 +0000 (0:00:02.036) 0:01:14.797 ***** 2026-02-28 00:58:46.474776 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.474784 | orchestrator | 2026-02-28 00:58:46.474792 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-02-28 00:58:46.474800 | orchestrator | Saturday 28 February 2026 00:47:39 +0000 (0:00:01.375) 0:01:16.173 ***** 2026-02-28 00:58:46.474808 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.474816 | orchestrator | 2026-02-28 00:58:46.474823 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:58:46.474831 | orchestrator | Saturday 28 February 2026 00:47:41 +0000 (0:00:01.384) 0:01:17.558 ***** 2026-02-28 00:58:46.474839 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.474846 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.474854 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.474861 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.474869 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.474877 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.474885 | orchestrator | 2026-02-28 00:58:46.474893 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-28 00:58:46.474900 | orchestrator | Saturday 28 February 2026 00:47:42 +0000 (0:00:01.879) 0:01:19.437 ***** 2026-02-28 00:58:46.474908 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.474916 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.474923 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.474931 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.474939 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.474946 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.474954 | orchestrator | 2026-02-28 00:58:46.474962 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:58:46.474970 | orchestrator | Saturday 28 February 2026 00:47:44 +0000 
(0:00:01.965) 0:01:21.403 ***** 2026-02-28 00:58:46.474977 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.474985 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.474992 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.475000 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.475007 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.475015 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.475023 | orchestrator | 2026-02-28 00:58:46.475030 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:58:46.475037 | orchestrator | Saturday 28 February 2026 00:47:47 +0000 (0:00:02.543) 0:01:23.947 ***** 2026-02-28 00:58:46.475044 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.475050 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.475057 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.475068 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.475075 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.475082 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.475088 | orchestrator | 2026-02-28 00:58:46.475095 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:58:46.475102 | orchestrator | Saturday 28 February 2026 00:47:49 +0000 (0:00:01.725) 0:01:25.672 ***** 2026-02-28 00:58:46.475113 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.475120 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.475127 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.475133 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.475140 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.475153 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.475160 | orchestrator | 2026-02-28 00:58:46.475167 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-02-28 00:58:46.475173 | orchestrator | Saturday 28 February 2026 00:47:51 +0000 (0:00:01.930) 0:01:27.603 ***** 2026-02-28 00:58:46.475180 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.475187 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.475194 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.475200 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.475207 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.475214 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.475220 | orchestrator | 2026-02-28 00:58:46.475227 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:58:46.475234 | orchestrator | Saturday 28 February 2026 00:47:52 +0000 (0:00:00.865) 0:01:28.468 ***** 2026-02-28 00:58:46.475241 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.475247 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.475254 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.475261 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.475268 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.475274 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.475281 | orchestrator | 2026-02-28 00:58:46.475288 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:58:46.475295 | orchestrator | Saturday 28 February 2026 00:47:53 +0000 (0:00:01.381) 0:01:29.850 ***** 2026-02-28 00:58:46.475302 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.475309 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.475315 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.475322 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.475329 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.475336 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.475343 | orchestrator | 2026-02-28 
00:58:46.475350 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:58:46.475357 | orchestrator | Saturday 28 February 2026 00:47:55 +0000 (0:00:01.786) 0:01:31.637 ***** 2026-02-28 00:58:46.475364 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.475370 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.475377 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.475384 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.475390 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.475397 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.475404 | orchestrator | 2026-02-28 00:58:46.475410 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:58:46.475417 | orchestrator | Saturday 28 February 2026 00:47:56 +0000 (0:00:01.606) 0:01:33.243 ***** 2026-02-28 00:58:46.475424 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.475431 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.475438 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.475444 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.475451 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.475458 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.475465 | orchestrator | 2026-02-28 00:58:46.475472 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:58:46.475479 | orchestrator | Saturday 28 February 2026 00:47:57 +0000 (0:00:00.788) 0:01:34.031 ***** 2026-02-28 00:58:46.475486 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.475492 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.475499 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.475511 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.475517 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.475524 | 
orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.475531 | orchestrator | 2026-02-28 00:58:46.475538 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:58:46.475545 | orchestrator | Saturday 28 February 2026 00:47:58 +0000 (0:00:01.064) 0:01:35.096 ***** 2026-02-28 00:58:46.475551 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.475558 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.475565 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.475572 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.475578 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.475585 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.475592 | orchestrator | 2026-02-28 00:58:46.475598 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:58:46.475605 | orchestrator | Saturday 28 February 2026 00:47:59 +0000 (0:00:00.747) 0:01:35.844 ***** 2026-02-28 00:58:46.475631 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.475638 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.475645 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.475652 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.475659 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.475665 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.475672 | orchestrator | 2026-02-28 00:58:46.475679 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:58:46.475686 | orchestrator | Saturday 28 February 2026 00:48:00 +0000 (0:00:01.177) 0:01:37.021 ***** 2026-02-28 00:58:46.475693 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.475699 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.475706 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.475713 | orchestrator | skipping: [testbed-node-0] 2026-02-28 
00:58:46.475720 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.475726 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.475733 | orchestrator | 2026-02-28 00:58:46.475740 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:58:46.475747 | orchestrator | Saturday 28 February 2026 00:48:01 +0000 (0:00:00.725) 0:01:37.747 ***** 2026-02-28 00:58:46.475754 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.475760 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.475775 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.475782 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.475789 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.475796 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.475803 | orchestrator | 2026-02-28 00:58:46.475810 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:58:46.475817 | orchestrator | Saturday 28 February 2026 00:48:03 +0000 (0:00:01.912) 0:01:39.659 ***** 2026-02-28 00:58:46.475823 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.475830 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.475837 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.475844 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.475855 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.475862 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.475869 | orchestrator | 2026-02-28 00:58:46.475876 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:58:46.475882 | orchestrator | Saturday 28 February 2026 00:48:04 +0000 (0:00:00.939) 0:01:40.598 ***** 2026-02-28 00:58:46.475889 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.475896 | orchestrator | skipping: [testbed-node-4] 2026-02-28 
00:58:46.475903 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.475909 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.475916 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.475923 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.475930 | orchestrator | 2026-02-28 00:58:46.475937 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:58:46.475948 | orchestrator | Saturday 28 February 2026 00:48:05 +0000 (0:00:01.492) 0:01:42.091 ***** 2026-02-28 00:58:46.475955 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.475962 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.475969 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.475975 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.475982 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.476048 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.476055 | orchestrator | 2026-02-28 00:58:46.476062 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:58:46.476069 | orchestrator | Saturday 28 February 2026 00:48:06 +0000 (0:00:00.989) 0:01:43.082 ***** 2026-02-28 00:58:46.476076 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.476083 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.476089 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.476096 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.476103 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.476110 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.476116 | orchestrator | 2026-02-28 00:58:46.476123 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-28 00:58:46.476130 | orchestrator | Saturday 28 February 2026 00:48:08 +0000 (0:00:01.763) 0:01:44.846 ***** 2026-02-28 00:58:46.476137 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.476143 | 
orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.476150 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.476157 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.476164 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.476171 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.476177 | orchestrator | 2026-02-28 00:58:46.476185 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-28 00:58:46.476191 | orchestrator | Saturday 28 February 2026 00:48:10 +0000 (0:00:02.502) 0:01:47.348 ***** 2026-02-28 00:58:46.476198 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.476205 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.476212 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.476218 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.476225 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.476232 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.476239 | orchestrator | 2026-02-28 00:58:46.476246 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-28 00:58:46.476253 | orchestrator | Saturday 28 February 2026 00:48:13 +0000 (0:00:02.889) 0:01:50.238 ***** 2026-02-28 00:58:46.476260 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.476267 | orchestrator | 2026-02-28 00:58:46.476274 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-28 00:58:46.476281 | orchestrator | Saturday 28 February 2026 00:48:15 +0000 (0:00:01.459) 0:01:51.698 ***** 2026-02-28 00:58:46.476288 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.476295 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.476302 | orchestrator | 
skipping: [testbed-node-0] 2026-02-28 00:58:46.476309 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.476316 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.476322 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.476329 | orchestrator | 2026-02-28 00:58:46.476336 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-28 00:58:46.476343 | orchestrator | Saturday 28 February 2026 00:48:16 +0000 (0:00:00.933) 0:01:52.632 ***** 2026-02-28 00:58:46.476350 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.476357 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.476364 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.476370 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.476377 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.476389 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.476396 | orchestrator | 2026-02-28 00:58:46.476403 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-28 00:58:46.476409 | orchestrator | Saturday 28 February 2026 00:48:17 +0000 (0:00:01.390) 0:01:54.022 ***** 2026-02-28 00:58:46.476416 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:46.476423 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:46.476430 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:46.476437 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:46.476444 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:46.476455 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:46.476463 | orchestrator | ok: 
[testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:46.476470 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:46.476477 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:46.476484 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-28 00:58:46.476496 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:46.476503 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-28 00:58:46.476510 | orchestrator | 2026-02-28 00:58:46.476517 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-28 00:58:46.476524 | orchestrator | Saturday 28 February 2026 00:48:19 +0000 (0:00:02.195) 0:01:56.217 ***** 2026-02-28 00:58:46.476531 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.476538 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.476545 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.476552 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.476559 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.476566 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.476573 | orchestrator | 2026-02-28 00:58:46.476580 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-28 00:58:46.476586 | orchestrator | Saturday 28 February 2026 00:48:21 +0000 (0:00:01.775) 0:01:57.993 ***** 2026-02-28 00:58:46.476593 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.476600 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.476628 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.476637 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.476644 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.476651 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.476658 | orchestrator | 2026-02-28 00:58:46.476665 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-28 00:58:46.476672 | orchestrator | Saturday 28 February 2026 00:48:22 +0000 (0:00:01.047) 0:01:59.040 ***** 2026-02-28 00:58:46.476679 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.476685 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.476692 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.476699 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.476705 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.476712 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.476718 | orchestrator | 2026-02-28 00:58:46.476725 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-28 00:58:46.476732 | orchestrator | Saturday 28 February 2026 00:48:23 +0000 (0:00:01.186) 0:02:00.227 ***** 2026-02-28 00:58:46.476738 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.476745 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.476760 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.476767 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.476773 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.476780 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.476787 | orchestrator | 2026-02-28 00:58:46.476793 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-28 00:58:46.476800 | orchestrator | Saturday 28 February 2026 00:48:24 +0000 (0:00:00.589) 0:02:00.816 ***** 2026-02-28 00:58:46.476807 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.476814 | orchestrator | 2026-02-28 00:58:46.476821 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-28 00:58:46.476827 | orchestrator | Saturday 28 February 2026 00:48:25 +0000 (0:00:01.169) 0:02:01.985 ***** 2026-02-28 00:58:46.476834 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.476841 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.476848 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.476854 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.476886 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.476894 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.476901 | orchestrator | 2026-02-28 00:58:46.476907 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-28 00:58:46.476914 | orchestrator | Saturday 28 February 2026 00:49:15 +0000 (0:00:49.493) 0:02:51.479 ***** 2026-02-28 00:58:46.476921 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:58:46.476928 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:46.476935 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:46.476941 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.476948 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:58:46.476955 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:46.476962 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:46.476969 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.476976 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 
00:58:46.476983 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:46.476990 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:46.476996 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477004 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:58:46.477010 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:46.477021 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:46.477028 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477035 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:58:46.477042 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:46.477049 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:46.477056 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477069 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-28 00:58:46.477076 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-28 00:58:46.477083 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-28 00:58:46.477090 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477097 | orchestrator | 2026-02-28 00:58:46.477104 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-28 00:58:46.477120 | orchestrator | Saturday 28 February 2026 00:49:16 +0000 (0:00:01.194) 0:02:52.674 ***** 2026-02-28 00:58:46.477127 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477134 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.477141 | 
orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477147 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477154 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477161 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477167 | orchestrator | 2026-02-28 00:58:46.477174 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-28 00:58:46.477181 | orchestrator | Saturday 28 February 2026 00:49:17 +0000 (0:00:01.330) 0:02:54.004 ***** 2026-02-28 00:58:46.477188 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477195 | orchestrator | 2026-02-28 00:58:46.477202 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-28 00:58:46.477208 | orchestrator | Saturday 28 February 2026 00:49:17 +0000 (0:00:00.353) 0:02:54.358 ***** 2026-02-28 00:58:46.477215 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477222 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.477229 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477236 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477243 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477250 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477257 | orchestrator | 2026-02-28 00:58:46.477263 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-28 00:58:46.477270 | orchestrator | Saturday 28 February 2026 00:49:18 +0000 (0:00:01.075) 0:02:55.434 ***** 2026-02-28 00:58:46.477277 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477284 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.477290 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477297 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477304 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477311 | 
orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477317 | orchestrator | 2026-02-28 00:58:46.477324 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-28 00:58:46.477331 | orchestrator | Saturday 28 February 2026 00:49:20 +0000 (0:00:01.134) 0:02:56.569 ***** 2026-02-28 00:58:46.477338 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477345 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.477351 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477358 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477365 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477371 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477378 | orchestrator | 2026-02-28 00:58:46.477385 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-28 00:58:46.477392 | orchestrator | Saturday 28 February 2026 00:49:21 +0000 (0:00:01.185) 0:02:57.754 ***** 2026-02-28 00:58:46.477399 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.477406 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.477412 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.477419 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.477426 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.477433 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.477440 | orchestrator | 2026-02-28 00:58:46.477447 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-28 00:58:46.477454 | orchestrator | Saturday 28 February 2026 00:49:24 +0000 (0:00:02.929) 0:03:00.683 ***** 2026-02-28 00:58:46.477460 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.477467 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.477474 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.477481 | orchestrator | ok: [testbed-node-1] 2026-02-28 
00:58:46.477487 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.477494 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.477501 | orchestrator | 2026-02-28 00:58:46.477514 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-28 00:58:46.477521 | orchestrator | Saturday 28 February 2026 00:49:25 +0000 (0:00:01.133) 0:03:01.817 ***** 2026-02-28 00:58:46.477528 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.477536 | orchestrator | 2026-02-28 00:58:46.477542 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-28 00:58:46.477549 | orchestrator | Saturday 28 February 2026 00:49:27 +0000 (0:00:01.650) 0:03:03.467 ***** 2026-02-28 00:58:46.477556 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477563 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477570 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.477576 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477583 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477590 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477596 | orchestrator | 2026-02-28 00:58:46.477603 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-28 00:58:46.477664 | orchestrator | Saturday 28 February 2026 00:49:28 +0000 (0:00:01.302) 0:03:04.769 ***** 2026-02-28 00:58:46.477677 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477689 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.477696 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477703 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477710 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477717 | 
orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477723 | orchestrator | 2026-02-28 00:58:46.477730 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-28 00:58:46.477737 | orchestrator | Saturday 28 February 2026 00:49:29 +0000 (0:00:01.368) 0:03:06.138 ***** 2026-02-28 00:58:46.477744 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477751 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.477764 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477771 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477779 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477786 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477793 | orchestrator | 2026-02-28 00:58:46.477801 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-28 00:58:46.477808 | orchestrator | Saturday 28 February 2026 00:49:30 +0000 (0:00:01.255) 0:03:07.394 ***** 2026-02-28 00:58:46.477815 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477822 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.477830 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477838 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477845 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477853 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477861 | orchestrator | 2026-02-28 00:58:46.477868 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-28 00:58:46.477875 | orchestrator | Saturday 28 February 2026 00:49:31 +0000 (0:00:00.983) 0:03:08.377 ***** 2026-02-28 00:58:46.477883 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477890 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.477897 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477905 | 
orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477912 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477919 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477926 | orchestrator | 2026-02-28 00:58:46.477934 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-28 00:58:46.477941 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:01.086) 0:03:09.464 ***** 2026-02-28 00:58:46.477949 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.477956 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.477963 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.477976 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.477983 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.477991 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.477998 | orchestrator | 2026-02-28 00:58:46.478005 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-28 00:58:46.478047 | orchestrator | Saturday 28 February 2026 00:49:33 +0000 (0:00:00.915) 0:03:10.379 ***** 2026-02-28 00:58:46.478057 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.478065 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.478073 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.478080 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.478088 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.478095 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.478103 | orchestrator | 2026-02-28 00:58:46.478110 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-28 00:58:46.478118 | orchestrator | Saturday 28 February 2026 00:49:35 +0000 (0:00:01.491) 0:03:11.871 ***** 2026-02-28 00:58:46.478125 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.478132 | 
orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.478139 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.478147 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.478154 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.478191 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.478200 | orchestrator | 2026-02-28 00:58:46.478207 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-28 00:58:46.478214 | orchestrator | Saturday 28 February 2026 00:49:36 +0000 (0:00:01.089) 0:03:12.960 ***** 2026-02-28 00:58:46.478222 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.478229 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.478237 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.478244 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.478252 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.478259 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.478266 | orchestrator | 2026-02-28 00:58:46.478274 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-28 00:58:46.478281 | orchestrator | Saturday 28 February 2026 00:49:38 +0000 (0:00:01.538) 0:03:14.499 ***** 2026-02-28 00:58:46.478289 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.478296 | orchestrator | 2026-02-28 00:58:46.478304 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-28 00:58:46.478311 | orchestrator | Saturday 28 February 2026 00:49:39 +0000 (0:00:01.599) 0:03:16.099 ***** 2026-02-28 00:58:46.478318 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-28 00:58:46.478326 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-28 00:58:46.478333 | 
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-28 00:58:46.478341 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-28 00:58:46.478348 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-28 00:58:46.478355 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-28 00:58:46.478362 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-28 00:58:46.478370 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-28 00:58:46.478377 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:46.478384 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-28 00:58:46.478392 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:46.478403 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-28 00:58:46.478411 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-28 00:58:46.478418 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:46.478431 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:46.478438 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:46.478446 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:46.478453 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-28 00:58:46.478475 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:46.478483 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:46.478490 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-28 00:58:46.478497 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:46.478505 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-28 
00:58:46.478512 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:46.478520 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-28 00:58:46.478527 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:46.478534 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-28 00:58:46.478541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-28 00:58:46.478548 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:46.478556 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-28 00:58:46.478563 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-28 00:58:46.478570 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:46.478577 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:46.478585 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:46.478592 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:46.478600 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:46.478627 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-28 00:58:46.478637 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:46.478644 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:46.478651 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:46.478658 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:46.478665 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:46.478673 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-28 00:58:46.478680 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:46.478687 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:46.478694 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:46.478701 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:46.478708 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:46.478715 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:46.478723 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-28 00:58:46.478730 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:46.478737 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-28 00:58:46.478744 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:46.478751 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:46.478758 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:46.478765 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-28 00:58:46.478778 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:46.478785 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:46.478792 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:46.478799 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:46.478807 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:46.478814 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 
2026-02-28 00:58:46.478821 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:46.478828 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:46.478835 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:46.478843 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:46.478850 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-28 00:58:46.478857 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:46.478864 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:46.478871 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:46.478883 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:46.478890 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:46.478898 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:46.478905 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:46.478912 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-28 00:58:46.478919 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:46.478932 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:46.478939 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-28 00:58:46.478946 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:46.478953 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:46.478961 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:46.478968 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:46.478975 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-28 00:58:46.478982 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-28 00:58:46.478990 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-28 00:58:46.478997 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-28 00:58:46.479004 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-28 00:58:46.479011 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-28 00:58:46.479018 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-28 00:58:46.479025 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-28 00:58:46.479033 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-28 00:58:46.479040 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-28 00:58:46.479047 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-28 00:58:46.479054 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-28 00:58:46.479061 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-28 00:58:46.479068 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-28 00:58:46.479075 | orchestrator | 2026-02-28 00:58:46.479083 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-28 00:58:46.479095 | orchestrator | Saturday 28 February 2026 00:49:47 +0000 (0:00:07.404) 0:03:23.503 ***** 2026-02-28 00:58:46.479102 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.479110 | orchestrator | skipping: [testbed-node-1] 2026-02-28 
00:58:46.479117 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.479124 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.479132 | orchestrator | 2026-02-28 00:58:46.479139 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-28 00:58:46.479146 | orchestrator | Saturday 28 February 2026 00:49:48 +0000 (0:00:01.269) 0:03:24.773 ***** 2026-02-28 00:58:46.479155 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.479163 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.479170 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.479177 | orchestrator | 2026-02-28 00:58:46.479185 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-28 00:58:46.479192 | orchestrator | Saturday 28 February 2026 00:49:49 +0000 (0:00:01.492) 0:03:26.265 ***** 2026-02-28 00:58:46.479199 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.479206 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.479214 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.479221 | orchestrator | 2026-02-28 00:58:46.479228 | orchestrator | TASK [ceph-config : Reset num_osds] 
******************************************** 2026-02-28 00:58:46.479235 | orchestrator | Saturday 28 February 2026 00:49:51 +0000 (0:00:01.365) 0:03:27.631 ***** 2026-02-28 00:58:46.479243 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.479250 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.479257 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.479265 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.479272 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.479279 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.479286 | orchestrator | 2026-02-28 00:58:46.479293 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-28 00:58:46.479301 | orchestrator | Saturday 28 February 2026 00:49:51 +0000 (0:00:00.738) 0:03:28.369 ***** 2026-02-28 00:58:46.479308 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.479315 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.479323 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.479330 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.479337 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.479344 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.479351 | orchestrator | 2026-02-28 00:58:46.479368 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-28 00:58:46.479375 | orchestrator | Saturday 28 February 2026 00:49:52 +0000 (0:00:01.038) 0:03:29.408 ***** 2026-02-28 00:58:46.479383 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.479390 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.479397 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.479405 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.479412 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.479419 | orchestrator | skipping: [testbed-node-2] 2026-02-28 
2026-02-28 00:58:46.479426 | orchestrator |
TASK [ceph-config : Set_fact rejected_devices] *********************************
Saturday 28 February 2026 00:49:54 +0000 (0:00:01.264) 0:03:30.672 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _devices] *****************************************
Saturday 28 February 2026 00:49:55 +0000 (0:00:01.019) 0:03:31.691 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Saturday 28 February 2026 00:49:56 +0000 (0:00:00.835) 0:03:32.527 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Saturday 28 February 2026 00:49:57 +0000 (0:00:01.132) 0:03:33.659 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Saturday 28 February 2026 00:49:58 +0000 (0:00:00.853) 0:03:34.512 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Saturday 28 February 2026 00:49:59 +0000 (0:00:01.314) 0:03:35.827 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Saturday 28 February 2026 00:50:02 +0000 (0:00:03.113) 0:03:38.940 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
ok: [testbed-node-5]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Saturday 28 February 2026 00:50:03 +0000 (0:00:01.325) 0:03:40.266 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
skipping: [testbed-node-0]
ok: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Saturday 28 February 2026 00:50:04 +0000 (0:00:00.967) 0:03:41.233 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Render rgw configs] ****************************************
Saturday 28 February 2026 00:50:06 +0000 (0:00:01.410) 0:03:42.644 *****
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set config to cluster] *************************************
Saturday 28 February 2026 00:50:07 +0000 (0:00:01.490) 0:03:44.134 *****
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-0]
skipping: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set rgw configs to file] ***********************************
Saturday 28 February 2026 00:50:08 +0000 (0:00:01.246) 0:03:45.381 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Create ceph conf directory] ********************************
Saturday 28 February 2026 00:50:09 +0000 (0:00:00.871) 0:03:46.253 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Saturday 28 February 2026 00:50:10 +0000 (0:00:00.998) 0:03:47.251 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Saturday 28 February 2026 00:50:11 +0000 (0:00:00.770) 0:03:48.021 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Saturday 28 February 2026 00:50:13 +0000 (0:00:01.443) 0:03:49.465 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Saturday 28 February 2026 00:50:14 +0000 (0:00:01.938) 0:03:51.404 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _interface] ****************************************
Saturday 28 February 2026 00:50:17 +0000 (0:00:02.351) 0:03:53.756 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Saturday 28 February 2026 00:50:17 +0000 (0:00:00.650) 0:03:54.407 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Saturday 28 February 2026 00:50:18 +0000 (0:00:00.574) 0:03:54.981 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Saturday 28 February 2026 00:50:19 +0000 (0:00:00.628) 0:03:55.609 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Saturday 28 February 2026 00:50:20 +0000 (0:00:01.022) 0:03:56.632 *****
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-5] => (item=0)
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-config : Generate Ceph file] ****************************************
Saturday 28 February 2026 00:50:23 +0000 (0:00:03.083) 0:03:59.715 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 28 February 2026 00:50:26 +0000 (0:00:03.680) 0:04:03.396 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Saturday 28 February 2026 00:50:28 +0000 (0:00:01.109) 0:04:04.505 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Saturday 28 February 2026 00:50:29 +0000 (0:00:01.217) 0:04:05.723 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Saturday 28 February 2026 00:50:29 +0000 (0:00:00.355) 0:04:06.078 *****
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Saturday 28 February 2026 00:50:31 +0000 (0:00:01.779) 0:04:07.858 *****
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Saturday 28 February 2026 00:50:32 +0000 (0:00:00.796) 0:04:08.654 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Saturday 28 February 2026 00:50:32 +0000 (0:00:00.413) 0:04:09.068 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Saturday 28 February 2026 00:50:33 +0000 (0:00:01.005) 0:04:10.073 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Saturday 28 February 2026 00:50:34 +0000 (0:00:00.470) 0:04:10.543 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Saturday 28 February 2026 00:50:34 +0000 (0:00:00.427) 0:04:10.971 *****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Saturday 28 February 2026 00:50:34 +0000 (0:00:00.273) 0:04:11.245 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Saturday 28 February 2026 00:50:35 +0000 (0:00:00.404) 0:04:11.649 *****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Saturday 28 February 2026 00:50:35 +0000 (0:00:00.257) 0:04:11.907 *****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Saturday 28 February 2026 00:50:35 +0000 (0:00:00.249) 0:04:12.157 *****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Saturday 28 February 2026 00:50:36 +0000 (0:00:00.435) 0:04:12.592 *****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Saturday 28 February 2026 00:50:36 +0000 (0:00:00.258) 0:04:12.850 *****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Saturday 28 February 2026 00:50:36 +0000 (0:00:00.298) 0:04:13.148 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Saturday 28 February 2026 00:50:37 +0000 (0:00:00.470) 0:04:13.619 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Saturday 28 February 2026 00:50:37 +0000 (0:00:00.386) 0:04:14.005 *****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Saturday 28 February 2026 00:50:37 +0000 (0:00:00.255) 0:04:14.261 *****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Saturday 28 February 2026 00:50:38 +0000 (0:00:00.248) 0:04:14.509 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Saturday 28 February 2026 00:50:39 +0000 (0:00:01.444) 0:04:15.953 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Saturday 28 February 2026 00:50:39 +0000 (0:00:00.429) 0:04:16.383 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Saturday 28 February 2026 00:50:41 +0000 (0:00:01.343) 0:04:17.726 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Saturday 28 February 2026 00:50:42 +0000 (0:00:01.374) 0:04:19.101 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Saturday 28 February 2026 00:50:43 +0000 (0:00:00.397) 0:04:19.498 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Saturday 28 February 2026 00:50:43 +0000 (0:00:00.932) 0:04:20.431 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Saturday 28 February 2026 00:50:44 +0000 (0:00:00.724) 0:04:21.156 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Saturday 28 February 2026 00:50:46 +0000 (0:00:01.562) 0:04:22.718 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Saturday 28 February 2026 00:50:46 +0000 (0:00:00.723) 0:04:23.441 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Saturday 28 February 2026 00:50:47 +0000 (0:00:00.326) 0:04:23.768 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Saturday 28 February 2026 00:50:48 +0000 (0:00:01.238) 0:04:25.006 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Saturday 28 February 2026 00:50:49 +0000 (0:00:01.302) 0:04:26.308 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Saturday 28 February 2026 00:50:50 +0000 (0:00:00.436) 0:04:26.745 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Saturday 28 February 2026 00:50:51 +0000 (0:00:01.422) 0:04:28.168 *****
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Saturday 28 February 2026 00:50:52 +0000 (0:00:00.663) 0:04:28.831 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 28 February 2026 00:50:53 +0000 (0:00:00.892) 0:04:29.724 *****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 28 February 2026 00:50:53 +0000 (0:00:00.642) 0:04:30.366 *****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 28 February 2026 00:50:54 +0000 (0:00:00.689) 0:04:31.056 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 28 February 2026 00:50:55 +0000 (0:00:01.249) 0:04:32.305 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
2026-02-28 00:58:46.482901 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:00.345) 0:04:32.651 ***** 2026-02-28 00:58:46.482908 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.482915 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.482922 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.482928 | orchestrator | 2026-02-28 00:58:46.482935 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:58:46.482951 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:00.370) 0:04:33.021 ***** 2026-02-28 00:58:46.482958 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.482965 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.482972 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.482979 | orchestrator | 2026-02-28 00:58:46.482986 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:58:46.482993 | orchestrator | Saturday 28 February 2026 00:50:56 +0000 (0:00:00.357) 0:04:33.379 ***** 2026-02-28 00:58:46.483000 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483007 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483014 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483021 | orchestrator | 2026-02-28 00:58:46.483032 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-28 00:58:46.483040 | orchestrator | Saturday 28 February 2026 00:50:58 +0000 (0:00:01.318) 0:04:34.698 ***** 2026-02-28 00:58:46.483047 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.483054 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.483060 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.483067 | orchestrator | 2026-02-28 00:58:46.483074 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 
00:58:46.483081 | orchestrator | Saturday 28 February 2026 00:50:58 +0000 (0:00:00.380) 0:04:35.079 ***** 2026-02-28 00:58:46.483095 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.483102 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.483108 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.483115 | orchestrator | 2026-02-28 00:58:46.483122 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:58:46.483129 | orchestrator | Saturday 28 February 2026 00:50:59 +0000 (0:00:00.467) 0:04:35.546 ***** 2026-02-28 00:58:46.483136 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483143 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483150 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483156 | orchestrator | 2026-02-28 00:58:46.483163 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:58:46.483170 | orchestrator | Saturday 28 February 2026 00:50:59 +0000 (0:00:00.896) 0:04:36.443 ***** 2026-02-28 00:58:46.483177 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483183 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483190 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483196 | orchestrator | 2026-02-28 00:58:46.483202 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:58:46.483209 | orchestrator | Saturday 28 February 2026 00:51:01 +0000 (0:00:01.228) 0:04:37.671 ***** 2026-02-28 00:58:46.483215 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.483222 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.483228 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.483234 | orchestrator | 2026-02-28 00:58:46.483240 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:58:46.483247 | orchestrator | 
Saturday 28 February 2026 00:51:01 +0000 (0:00:00.538) 0:04:38.209 ***** 2026-02-28 00:58:46.483253 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483259 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483266 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483272 | orchestrator | 2026-02-28 00:58:46.483278 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:58:46.483285 | orchestrator | Saturday 28 February 2026 00:51:02 +0000 (0:00:00.547) 0:04:38.756 ***** 2026-02-28 00:58:46.483291 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.483297 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.483303 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.483310 | orchestrator | 2026-02-28 00:58:46.483316 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:58:46.483323 | orchestrator | Saturday 28 February 2026 00:51:02 +0000 (0:00:00.548) 0:04:39.305 ***** 2026-02-28 00:58:46.483333 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.483340 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.483346 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.483352 | orchestrator | 2026-02-28 00:58:46.483359 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:58:46.483365 | orchestrator | Saturday 28 February 2026 00:51:03 +0000 (0:00:00.855) 0:04:40.160 ***** 2026-02-28 00:58:46.483372 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.483378 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.483384 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.483391 | orchestrator | 2026-02-28 00:58:46.483397 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:58:46.483404 | orchestrator | Saturday 28 February 
2026 00:51:04 +0000 (0:00:00.648) 0:04:40.809 ***** 2026-02-28 00:58:46.483410 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.483416 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.483423 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.483429 | orchestrator | 2026-02-28 00:58:46.483435 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:58:46.483442 | orchestrator | Saturday 28 February 2026 00:51:05 +0000 (0:00:00.834) 0:04:41.643 ***** 2026-02-28 00:58:46.483448 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.483454 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.483461 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.483467 | orchestrator | 2026-02-28 00:58:46.483473 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:58:46.483479 | orchestrator | Saturday 28 February 2026 00:51:05 +0000 (0:00:00.466) 0:04:42.110 ***** 2026-02-28 00:58:46.483486 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483492 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483498 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483505 | orchestrator | 2026-02-28 00:58:46.483511 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:58:46.483517 | orchestrator | Saturday 28 February 2026 00:51:06 +0000 (0:00:00.672) 0:04:42.782 ***** 2026-02-28 00:58:46.483523 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483530 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483536 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483542 | orchestrator | 2026-02-28 00:58:46.483549 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:58:46.483555 | orchestrator | Saturday 28 February 2026 00:51:06 +0000 (0:00:00.386) 
0:04:43.169 ***** 2026-02-28 00:58:46.483561 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483568 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483574 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483580 | orchestrator | 2026-02-28 00:58:46.483586 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-28 00:58:46.483592 | orchestrator | Saturday 28 February 2026 00:51:07 +0000 (0:00:00.784) 0:04:43.953 ***** 2026-02-28 00:58:46.483599 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483605 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483630 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483637 | orchestrator | 2026-02-28 00:58:46.483643 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-28 00:58:46.483653 | orchestrator | Saturday 28 February 2026 00:51:07 +0000 (0:00:00.415) 0:04:44.369 ***** 2026-02-28 00:58:46.483660 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.483667 | orchestrator | 2026-02-28 00:58:46.483673 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-28 00:58:46.483679 | orchestrator | Saturday 28 February 2026 00:51:08 +0000 (0:00:00.944) 0:04:45.314 ***** 2026-02-28 00:58:46.483686 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.483692 | orchestrator | 2026-02-28 00:58:46.483703 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-28 00:58:46.483718 | orchestrator | Saturday 28 February 2026 00:51:09 +0000 (0:00:00.250) 0:04:45.565 ***** 2026-02-28 00:58:46.483724 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-28 00:58:46.483731 | orchestrator | 2026-02-28 00:58:46.483737 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-02-28 00:58:46.483743 | orchestrator | Saturday 28 February 2026 00:51:10 +0000 (0:00:01.572) 0:04:47.138 ***** 2026-02-28 00:58:46.483750 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483756 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483762 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483768 | orchestrator | 2026-02-28 00:58:46.483775 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-28 00:58:46.483781 | orchestrator | Saturday 28 February 2026 00:51:11 +0000 (0:00:00.445) 0:04:47.584 ***** 2026-02-28 00:58:46.483787 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483794 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483800 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483806 | orchestrator | 2026-02-28 00:58:46.483812 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-28 00:58:46.483819 | orchestrator | Saturday 28 February 2026 00:51:11 +0000 (0:00:00.721) 0:04:48.305 ***** 2026-02-28 00:58:46.483825 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.483832 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.483838 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.483844 | orchestrator | 2026-02-28 00:58:46.483850 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-28 00:58:46.483857 | orchestrator | Saturday 28 February 2026 00:51:13 +0000 (0:00:01.637) 0:04:49.943 ***** 2026-02-28 00:58:46.483863 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.483869 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.483876 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.483882 | orchestrator | 2026-02-28 00:58:46.483888 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-02-28 00:58:46.483894 | orchestrator | Saturday 28 February 2026 00:51:14 +0000 (0:00:01.029) 0:04:50.972 ***** 2026-02-28 00:58:46.483901 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.483907 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.483913 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.483920 | orchestrator | 2026-02-28 00:58:46.483926 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-28 00:58:46.483932 | orchestrator | Saturday 28 February 2026 00:51:15 +0000 (0:00:00.875) 0:04:51.848 ***** 2026-02-28 00:58:46.483938 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.483945 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.483951 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.483957 | orchestrator | 2026-02-28 00:58:46.483964 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-28 00:58:46.483970 | orchestrator | Saturday 28 February 2026 00:51:16 +0000 (0:00:01.243) 0:04:53.092 ***** 2026-02-28 00:58:46.483976 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.483982 | orchestrator | 2026-02-28 00:58:46.483989 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-28 00:58:46.483995 | orchestrator | Saturday 28 February 2026 00:51:18 +0000 (0:00:01.774) 0:04:54.867 ***** 2026-02-28 00:58:46.484001 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.484007 | orchestrator | 2026-02-28 00:58:46.484014 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-28 00:58:46.484020 | orchestrator | Saturday 28 February 2026 00:51:20 +0000 (0:00:01.737) 0:04:56.604 ***** 2026-02-28 00:58:46.484026 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.484033 | orchestrator 
| ok: [testbed-node-0] => (item=None) 2026-02-28 00:58:46.484039 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.484045 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 00:58:46.484057 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-28 00:58:46.484064 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 00:58:46.484070 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-28 00:58:46.484077 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 00:58:46.484083 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-02-28 00:58:46.484089 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 00:58:46.484096 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-28 00:58:46.484102 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-28 00:58:46.484108 | orchestrator | 2026-02-28 00:58:46.484115 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-28 00:58:46.484121 | orchestrator | Saturday 28 February 2026 00:51:26 +0000 (0:00:05.983) 0:05:02.587 ***** 2026-02-28 00:58:46.484127 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.484134 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.484140 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.484146 | orchestrator | 2026-02-28 00:58:46.484152 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-28 00:58:46.484159 | orchestrator | Saturday 28 February 2026 00:51:27 +0000 (0:00:01.820) 0:05:04.408 ***** 2026-02-28 00:58:46.484165 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.484171 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.484177 | orchestrator | ok: [testbed-node-2] 
2026-02-28 00:58:46.484184 | orchestrator | 2026-02-28 00:58:46.484193 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-28 00:58:46.484200 | orchestrator | Saturday 28 February 2026 00:51:28 +0000 (0:00:00.599) 0:05:05.008 ***** 2026-02-28 00:58:46.484206 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.484212 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.484218 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.484225 | orchestrator | 2026-02-28 00:58:46.484231 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-28 00:58:46.484237 | orchestrator | Saturday 28 February 2026 00:51:29 +0000 (0:00:00.984) 0:05:05.993 ***** 2026-02-28 00:58:46.484247 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.484254 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.484260 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.484266 | orchestrator | 2026-02-28 00:58:46.484272 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-28 00:58:46.484279 | orchestrator | Saturday 28 February 2026 00:51:32 +0000 (0:00:02.547) 0:05:08.541 ***** 2026-02-28 00:58:46.484285 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.484291 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.484297 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.484304 | orchestrator | 2026-02-28 00:58:46.484310 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-28 00:58:46.484316 | orchestrator | Saturday 28 February 2026 00:51:33 +0000 (0:00:01.510) 0:05:10.051 ***** 2026-02-28 00:58:46.484322 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.484329 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.484335 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.484341 
| orchestrator | 2026-02-28 00:58:46.484348 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-28 00:58:46.484354 | orchestrator | Saturday 28 February 2026 00:51:34 +0000 (0:00:00.418) 0:05:10.469 ***** 2026-02-28 00:58:46.484360 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.484366 | orchestrator | 2026-02-28 00:58:46.484373 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-28 00:58:46.484379 | orchestrator | Saturday 28 February 2026 00:51:35 +0000 (0:00:01.077) 0:05:11.547 ***** 2026-02-28 00:58:46.484390 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.484397 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.484403 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.484409 | orchestrator | 2026-02-28 00:58:46.484415 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-28 00:58:46.484421 | orchestrator | Saturday 28 February 2026 00:51:35 +0000 (0:00:00.830) 0:05:12.378 ***** 2026-02-28 00:58:46.484428 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.484434 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.484440 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.484447 | orchestrator | 2026-02-28 00:58:46.484453 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-28 00:58:46.484459 | orchestrator | Saturday 28 February 2026 00:51:36 +0000 (0:00:00.503) 0:05:12.881 ***** 2026-02-28 00:58:46.484465 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.484472 | orchestrator | 2026-02-28 00:58:46.484478 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-02-28 00:58:46.484484 | orchestrator | Saturday 28 February 2026 00:51:37 +0000 (0:00:01.378) 0:05:14.259 ***** 2026-02-28 00:58:46.484491 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.484497 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.484503 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.484509 | orchestrator | 2026-02-28 00:58:46.484516 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-28 00:58:46.484522 | orchestrator | Saturday 28 February 2026 00:51:40 +0000 (0:00:02.643) 0:05:16.903 ***** 2026-02-28 00:58:46.484528 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.484534 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.484541 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.484547 | orchestrator | 2026-02-28 00:58:46.484553 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-28 00:58:46.484559 | orchestrator | Saturday 28 February 2026 00:51:41 +0000 (0:00:01.291) 0:05:18.195 ***** 2026-02-28 00:58:46.484566 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.484572 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.484578 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.484584 | orchestrator | 2026-02-28 00:58:46.484590 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-28 00:58:46.484597 | orchestrator | Saturday 28 February 2026 00:51:43 +0000 (0:00:01.783) 0:05:19.978 ***** 2026-02-28 00:58:46.484603 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.484648 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.484655 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.484662 | orchestrator | 2026-02-28 00:58:46.484668 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-02-28 00:58:46.484675 | orchestrator | Saturday 28 February 2026 00:51:46 +0000 (0:00:02.645) 0:05:22.624 ***** 2026-02-28 00:58:46.484681 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.484687 | orchestrator | 2026-02-28 00:58:46.484694 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-28 00:58:46.484700 | orchestrator | Saturday 28 February 2026 00:51:47 +0000 (0:00:00.945) 0:05:23.569 ***** 2026-02-28 00:58:46.484706 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-02-28 00:58:46.484712 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.484719 | orchestrator | 2026-02-28 00:58:46.484725 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-28 00:58:46.484731 | orchestrator | Saturday 28 February 2026 00:52:09 +0000 (0:00:22.082) 0:05:45.652 ***** 2026-02-28 00:58:46.484738 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.484744 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.484754 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.484765 | orchestrator | 2026-02-28 00:58:46.484772 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-28 00:58:46.484778 | orchestrator | Saturday 28 February 2026 00:52:19 +0000 (0:00:09.925) 0:05:55.577 ***** 2026-02-28 00:58:46.484784 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.484791 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.484797 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.484803 | orchestrator | 2026-02-28 00:58:46.484809 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-28 00:58:46.484820 | orchestrator | 
Saturday 28 February 2026 00:52:19 +0000 (0:00:00.347) 0:05:55.925 ***** 2026-02-28 00:58:46.484828 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6cf2726823dbc8a645cc8c800f144b058d38f39'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-28 00:58:46.484835 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6cf2726823dbc8a645cc8c800f144b058d38f39'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-28 00:58:46.484843 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6cf2726823dbc8a645cc8c800f144b058d38f39'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-28 00:58:46.484851 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6cf2726823dbc8a645cc8c800f144b058d38f39'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-28 00:58:46.484857 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6cf2726823dbc8a645cc8c800f144b058d38f39'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-28 00:58:46.484865 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6cf2726823dbc8a645cc8c800f144b058d38f39'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f6cf2726823dbc8a645cc8c800f144b058d38f39'}])  2026-02-28 00:58:46.484872 | orchestrator | 2026-02-28 00:58:46.484878 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-28 00:58:46.484885 | orchestrator | Saturday 28 February 2026 00:52:34 +0000 (0:00:14.963) 0:06:10.889 ***** 2026-02-28 00:58:46.484891 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.484897 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.484904 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.484910 | orchestrator | 2026-02-28 00:58:46.484916 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-28 00:58:46.484923 | orchestrator | Saturday 28 February 2026 00:52:35 +0000 (0:00:00.612) 0:06:11.501 ***** 2026-02-28 00:58:46.484929 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.484940 | orchestrator | 2026-02-28 00:58:46.484946 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-28 00:58:46.484952 | orchestrator | Saturday 28 February 2026 00:52:35 +0000 (0:00:00.615) 0:06:12.116 ***** 2026-02-28 00:58:46.484958 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.484965 | orchestrator | ok: [testbed-node-1] 2026-02-28 
00:58:46.484971 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.484977 | orchestrator | 2026-02-28 00:58:46.484983 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-28 00:58:46.484990 | orchestrator | Saturday 28 February 2026 00:52:36 +0000 (0:00:00.329) 0:06:12.446 ***** 2026-02-28 00:58:46.484996 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485002 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485009 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485015 | orchestrator | 2026-02-28 00:58:46.485021 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-28 00:58:46.485028 | orchestrator | Saturday 28 February 2026 00:52:36 +0000 (0:00:00.313) 0:06:12.759 ***** 2026-02-28 00:58:46.485037 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-28 00:58:46.485044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-28 00:58:46.485050 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-28 00:58:46.485056 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485063 | orchestrator | 2026-02-28 00:58:46.485069 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-28 00:58:46.485075 | orchestrator | Saturday 28 February 2026 00:52:37 +0000 (0:00:01.131) 0:06:13.890 ***** 2026-02-28 00:58:46.485081 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.485092 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.485098 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.485104 | orchestrator | 2026-02-28 00:58:46.485111 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-02-28 00:58:46.485117 | orchestrator | 2026-02-28 00:58:46.485123 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-02-28 00:58:46.485130 | orchestrator | Saturday 28 February 2026 00:52:38 +0000 (0:00:00.560) 0:06:14.451 ***** 2026-02-28 00:58:46.485136 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.485142 | orchestrator | 2026-02-28 00:58:46.485149 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-28 00:58:46.485155 | orchestrator | Saturday 28 February 2026 00:52:38 +0000 (0:00:00.872) 0:06:15.323 ***** 2026-02-28 00:58:46.485161 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.485168 | orchestrator | 2026-02-28 00:58:46.485174 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:58:46.485180 | orchestrator | Saturday 28 February 2026 00:52:39 +0000 (0:00:00.568) 0:06:15.891 ***** 2026-02-28 00:58:46.485186 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.485191 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.485197 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.485202 | orchestrator | 2026-02-28 00:58:46.485208 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-28 00:58:46.485213 | orchestrator | Saturday 28 February 2026 00:52:40 +0000 (0:00:00.774) 0:06:16.666 ***** 2026-02-28 00:58:46.485218 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485224 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485230 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485235 | orchestrator | 2026-02-28 00:58:46.485240 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:58:46.485246 | orchestrator | Saturday 28 February 2026 00:52:40 +0000 
(0:00:00.374) 0:06:17.041 ***** 2026-02-28 00:58:46.485252 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485261 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485266 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485272 | orchestrator | 2026-02-28 00:58:46.485277 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:58:46.485283 | orchestrator | Saturday 28 February 2026 00:52:41 +0000 (0:00:00.607) 0:06:17.649 ***** 2026-02-28 00:58:46.485289 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485294 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485299 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485305 | orchestrator | 2026-02-28 00:58:46.485310 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:58:46.485316 | orchestrator | Saturday 28 February 2026 00:52:41 +0000 (0:00:00.341) 0:06:17.991 ***** 2026-02-28 00:58:46.485321 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.485327 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.485332 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.485338 | orchestrator | 2026-02-28 00:58:46.485343 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-28 00:58:46.485349 | orchestrator | Saturday 28 February 2026 00:52:42 +0000 (0:00:00.753) 0:06:18.744 ***** 2026-02-28 00:58:46.485354 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485360 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485365 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485371 | orchestrator | 2026-02-28 00:58:46.485376 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:58:46.485381 | orchestrator | Saturday 28 February 2026 00:52:42 +0000 (0:00:00.321) 
0:06:19.065 ***** 2026-02-28 00:58:46.485387 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485393 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485398 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485403 | orchestrator | 2026-02-28 00:58:46.485409 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:58:46.485414 | orchestrator | Saturday 28 February 2026 00:52:43 +0000 (0:00:00.592) 0:06:19.658 ***** 2026-02-28 00:58:46.485420 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.485425 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.485431 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.485436 | orchestrator | 2026-02-28 00:58:46.485442 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:58:46.485447 | orchestrator | Saturday 28 February 2026 00:52:43 +0000 (0:00:00.765) 0:06:20.423 ***** 2026-02-28 00:58:46.485453 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.485458 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.485464 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.485469 | orchestrator | 2026-02-28 00:58:46.485475 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:58:46.485480 | orchestrator | Saturday 28 February 2026 00:52:44 +0000 (0:00:00.754) 0:06:21.178 ***** 2026-02-28 00:58:46.485486 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485491 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485497 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485502 | orchestrator | 2026-02-28 00:58:46.485508 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:58:46.485513 | orchestrator | Saturday 28 February 2026 00:52:45 +0000 (0:00:00.361) 0:06:21.540 ***** 2026-02-28 
00:58:46.485519 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.485524 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.485533 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.485539 | orchestrator | 2026-02-28 00:58:46.485544 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:58:46.485550 | orchestrator | Saturday 28 February 2026 00:52:45 +0000 (0:00:00.679) 0:06:22.220 ***** 2026-02-28 00:58:46.485555 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485561 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485566 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485575 | orchestrator | 2026-02-28 00:58:46.485581 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:58:46.485589 | orchestrator | Saturday 28 February 2026 00:52:46 +0000 (0:00:00.396) 0:06:22.616 ***** 2026-02-28 00:58:46.485595 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485601 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485606 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485630 | orchestrator | 2026-02-28 00:58:46.485640 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:58:46.485648 | orchestrator | Saturday 28 February 2026 00:52:46 +0000 (0:00:00.316) 0:06:22.932 ***** 2026-02-28 00:58:46.485657 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485666 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485675 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485684 | orchestrator | 2026-02-28 00:58:46.485693 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:58:46.485698 | orchestrator | Saturday 28 February 2026 00:52:46 +0000 (0:00:00.308) 0:06:23.241 ***** 2026-02-28 00:58:46.485704 | 
orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485709 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485715 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485720 | orchestrator | 2026-02-28 00:58:46.485726 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:58:46.485731 | orchestrator | Saturday 28 February 2026 00:52:47 +0000 (0:00:00.656) 0:06:23.898 ***** 2026-02-28 00:58:46.485737 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485742 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485748 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485753 | orchestrator | 2026-02-28 00:58:46.485759 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:58:46.485764 | orchestrator | Saturday 28 February 2026 00:52:47 +0000 (0:00:00.430) 0:06:24.329 ***** 2026-02-28 00:58:46.485770 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.485775 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.485781 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.485786 | orchestrator | 2026-02-28 00:58:46.485792 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:58:46.485797 | orchestrator | Saturday 28 February 2026 00:52:48 +0000 (0:00:00.362) 0:06:24.692 ***** 2026-02-28 00:58:46.485803 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.485808 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.485814 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.485819 | orchestrator | 2026-02-28 00:58:46.485824 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:58:46.485830 | orchestrator | Saturday 28 February 2026 00:52:48 +0000 (0:00:00.492) 0:06:25.184 ***** 2026-02-28 00:58:46.485835 | orchestrator | ok: [testbed-node-1] 
2026-02-28 00:58:46.485841 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.485846 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.485852 | orchestrator | 2026-02-28 00:58:46.485857 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-28 00:58:46.485863 | orchestrator | Saturday 28 February 2026 00:52:49 +0000 (0:00:00.895) 0:06:26.079 ***** 2026-02-28 00:58:46.485868 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-28 00:58:46.485874 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 00:58:46.485879 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 00:58:46.485885 | orchestrator | 2026-02-28 00:58:46.485890 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-28 00:58:46.485896 | orchestrator | Saturday 28 February 2026 00:52:50 +0000 (0:00:00.747) 0:06:26.827 ***** 2026-02-28 00:58:46.485901 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.485914 | orchestrator | 2026-02-28 00:58:46.485919 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-28 00:58:46.485925 | orchestrator | Saturday 28 February 2026 00:52:50 +0000 (0:00:00.581) 0:06:27.408 ***** 2026-02-28 00:58:46.485930 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.485936 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.485941 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.485947 | orchestrator | 2026-02-28 00:58:46.485953 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-28 00:58:46.485958 | orchestrator | Saturday 28 February 2026 00:52:51 +0000 (0:00:00.785) 0:06:28.194 ***** 2026-02-28 00:58:46.485964 | 
orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.485969 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.485975 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.485980 | orchestrator | 2026-02-28 00:58:46.485986 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-28 00:58:46.485991 | orchestrator | Saturday 28 February 2026 00:52:52 +0000 (0:00:00.651) 0:06:28.845 ***** 2026-02-28 00:58:46.485997 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 00:58:46.486003 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 00:58:46.486008 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 00:58:46.486014 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-28 00:58:46.486118 | orchestrator | 2026-02-28 00:58:46.486125 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-28 00:58:46.486130 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:10.609) 0:06:39.455 ***** 2026-02-28 00:58:46.486136 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.486141 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.486147 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.486152 | orchestrator | 2026-02-28 00:58:46.486162 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-28 00:58:46.486167 | orchestrator | Saturday 28 February 2026 00:53:03 +0000 (0:00:00.360) 0:06:39.815 ***** 2026-02-28 00:58:46.486173 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-28 00:58:46.486179 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-28 00:58:46.486184 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-28 00:58:46.486189 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-28 00:58:46.486195 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.486222 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.486229 | orchestrator | 2026-02-28 00:58:46.486234 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-28 00:58:46.486240 | orchestrator | Saturday 28 February 2026 00:53:05 +0000 (0:00:02.347) 0:06:42.163 ***** 2026-02-28 00:58:46.486246 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-28 00:58:46.486251 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-28 00:58:46.486257 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-28 00:58:46.486262 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 00:58:46.486268 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-28 00:58:46.486273 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-28 00:58:46.486278 | orchestrator | 2026-02-28 00:58:46.486284 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-28 00:58:46.486289 | orchestrator | Saturday 28 February 2026 00:53:07 +0000 (0:00:01.508) 0:06:43.672 ***** 2026-02-28 00:58:46.486295 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.486300 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.486306 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.486311 | orchestrator | 2026-02-28 00:58:46.486317 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-28 00:58:46.486323 | orchestrator | Saturday 28 February 2026 00:53:07 +0000 (0:00:00.691) 0:06:44.364 ***** 2026-02-28 00:58:46.486333 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.486339 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.486344 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.486350 | 
orchestrator | 2026-02-28 00:58:46.486355 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-28 00:58:46.486361 | orchestrator | Saturday 28 February 2026 00:53:08 +0000 (0:00:00.383) 0:06:44.748 ***** 2026-02-28 00:58:46.486366 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.486372 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.486377 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.486383 | orchestrator | 2026-02-28 00:58:46.486388 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-28 00:58:46.486394 | orchestrator | Saturday 28 February 2026 00:53:08 +0000 (0:00:00.316) 0:06:45.064 ***** 2026-02-28 00:58:46.486399 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.486405 | orchestrator | 2026-02-28 00:58:46.486410 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-28 00:58:46.486416 | orchestrator | Saturday 28 February 2026 00:53:09 +0000 (0:00:00.769) 0:06:45.833 ***** 2026-02-28 00:58:46.486421 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.486427 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.486432 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.486437 | orchestrator | 2026-02-28 00:58:46.486443 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-28 00:58:46.486449 | orchestrator | Saturday 28 February 2026 00:53:09 +0000 (0:00:00.338) 0:06:46.172 ***** 2026-02-28 00:58:46.486454 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.486459 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.486465 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.486470 | orchestrator | 2026-02-28 00:58:46.486476 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-28 00:58:46.486481 | orchestrator | Saturday 28 February 2026 00:53:10 +0000 (0:00:00.345) 0:06:46.518 ***** 2026-02-28 00:58:46.486487 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.486492 | orchestrator | 2026-02-28 00:58:46.486498 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-28 00:58:46.486503 | orchestrator | Saturday 28 February 2026 00:53:10 +0000 (0:00:00.783) 0:06:47.301 ***** 2026-02-28 00:58:46.486509 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.486514 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.486520 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.486525 | orchestrator | 2026-02-28 00:58:46.486530 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-28 00:58:46.486536 | orchestrator | Saturday 28 February 2026 00:53:12 +0000 (0:00:01.285) 0:06:48.586 ***** 2026-02-28 00:58:46.486541 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.486547 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.486552 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.486558 | orchestrator | 2026-02-28 00:58:46.486563 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-28 00:58:46.486569 | orchestrator | Saturday 28 February 2026 00:53:13 +0000 (0:00:01.198) 0:06:49.785 ***** 2026-02-28 00:58:46.486574 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.486580 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.486585 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.486591 | orchestrator | 2026-02-28 00:58:46.486596 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-02-28 00:58:46.486602 | orchestrator | Saturday 28 February 2026 00:53:15 +0000 (0:00:01.813) 0:06:51.599 ***** 2026-02-28 00:58:46.486619 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.486626 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.486635 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.486641 | orchestrator | 2026-02-28 00:58:46.486646 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-28 00:58:46.486655 | orchestrator | Saturday 28 February 2026 00:53:17 +0000 (0:00:02.424) 0:06:54.024 ***** 2026-02-28 00:58:46.486661 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.486667 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.486672 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-28 00:58:46.486678 | orchestrator | 2026-02-28 00:58:46.486683 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-28 00:58:46.486689 | orchestrator | Saturday 28 February 2026 00:53:17 +0000 (0:00:00.375) 0:06:54.399 ***** 2026-02-28 00:58:46.486712 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-28 00:58:46.486718 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-28 00:58:46.486724 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-28 00:58:46.486729 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-28 00:58:46.486735 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-02-28 00:58:46.486740 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:58:46.486746 | orchestrator | 2026-02-28 00:58:46.486752 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-28 00:58:46.486757 | orchestrator | Saturday 28 February 2026 00:53:48 +0000 (0:00:30.088) 0:07:24.488 ***** 2026-02-28 00:58:46.486763 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:58:46.486768 | orchestrator | 2026-02-28 00:58:46.486774 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-28 00:58:46.486779 | orchestrator | Saturday 28 February 2026 00:53:49 +0000 (0:00:01.307) 0:07:25.796 ***** 2026-02-28 00:58:46.486785 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.486790 | orchestrator | 2026-02-28 00:58:46.486796 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-28 00:58:46.486801 | orchestrator | Saturday 28 February 2026 00:53:49 +0000 (0:00:00.320) 0:07:26.117 ***** 2026-02-28 00:58:46.486807 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.486812 | orchestrator | 2026-02-28 00:58:46.486818 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-28 00:58:46.486823 | orchestrator | Saturday 28 February 2026 00:53:49 +0000 (0:00:00.159) 0:07:26.276 ***** 2026-02-28 00:58:46.486829 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-28 00:58:46.486834 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-28 00:58:46.486840 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-28 00:58:46.486845 | orchestrator | 2026-02-28 00:58:46.486851 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-28 00:58:46.486856 | orchestrator | Saturday 28 February 2026 00:53:56 +0000 (0:00:06.217) 0:07:32.493 ***** 2026-02-28 00:58:46.486862 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-28 00:58:46.486867 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-28 00:58:46.486873 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-28 00:58:46.486879 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-28 00:58:46.486884 | orchestrator | 2026-02-28 00:58:46.486890 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-28 00:58:46.486895 | orchestrator | Saturday 28 February 2026 00:54:01 +0000 (0:00:04.999) 0:07:37.493 ***** 2026-02-28 00:58:46.486904 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.486910 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.486916 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.486921 | orchestrator | 2026-02-28 00:58:46.486927 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-28 00:58:46.486932 | orchestrator | Saturday 28 February 2026 00:54:01 +0000 (0:00:00.608) 0:07:38.102 ***** 2026-02-28 00:58:46.486938 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.486943 | orchestrator | 2026-02-28 00:58:46.486949 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-28 00:58:46.486954 | orchestrator | Saturday 28 February 2026 00:54:02 +0000 (0:00:00.647) 0:07:38.749 ***** 2026-02-28 00:58:46.486960 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.486965 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.486971 | orchestrator | ok: 
[testbed-node-2] 2026-02-28 00:58:46.486976 | orchestrator | 2026-02-28 00:58:46.486982 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-28 00:58:46.486987 | orchestrator | Saturday 28 February 2026 00:54:02 +0000 (0:00:00.313) 0:07:39.062 ***** 2026-02-28 00:58:46.486993 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.486998 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.487004 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.487009 | orchestrator | 2026-02-28 00:58:46.487014 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-28 00:58:46.487020 | orchestrator | Saturday 28 February 2026 00:54:03 +0000 (0:00:01.038) 0:07:40.101 ***** 2026-02-28 00:58:46.487025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-28 00:58:46.487031 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-28 00:58:46.487037 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-28 00:58:46.487042 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.487048 | orchestrator | 2026-02-28 00:58:46.487053 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-28 00:58:46.487062 | orchestrator | Saturday 28 February 2026 00:54:04 +0000 (0:00:00.757) 0:07:40.859 ***** 2026-02-28 00:58:46.487068 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.487073 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.487079 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.487084 | orchestrator | 2026-02-28 00:58:46.487090 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-02-28 00:58:46.487095 | orchestrator | 2026-02-28 00:58:46.487101 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 
00:58:46.487106 | orchestrator | Saturday 28 February 2026 00:54:05 +0000 (0:00:00.655) 0:07:41.514 ***** 2026-02-28 00:58:46.487128 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.487134 | orchestrator | 2026-02-28 00:58:46.487140 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-28 00:58:46.487145 | orchestrator | Saturday 28 February 2026 00:54:05 +0000 (0:00:00.518) 0:07:42.033 ***** 2026-02-28 00:58:46.487151 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.487157 | orchestrator | 2026-02-28 00:58:46.487162 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:58:46.487168 | orchestrator | Saturday 28 February 2026 00:54:06 +0000 (0:00:00.653) 0:07:42.687 ***** 2026-02-28 00:58:46.487173 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.487179 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.487184 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.487190 | orchestrator | 2026-02-28 00:58:46.487195 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-28 00:58:46.487205 | orchestrator | Saturday 28 February 2026 00:54:06 +0000 (0:00:00.329) 0:07:43.016 ***** 2026-02-28 00:58:46.487210 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.487216 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.487221 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.487227 | orchestrator | 2026-02-28 00:58:46.487232 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:58:46.487238 | orchestrator | Saturday 28 February 2026 00:54:07 +0000 (0:00:00.745) 0:07:43.761 ***** 
2026-02-28 00:58:46.487243 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.487249 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.487254 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.487259 | orchestrator | 2026-02-28 00:58:46.487265 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:58:46.487271 | orchestrator | Saturday 28 February 2026 00:54:07 +0000 (0:00:00.655) 0:07:44.417 ***** 2026-02-28 00:58:46.487276 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.487282 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.487287 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.487293 | orchestrator | 2026-02-28 00:58:46.487298 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:58:46.487304 | orchestrator | Saturday 28 February 2026 00:54:08 +0000 (0:00:00.912) 0:07:45.329 ***** 2026-02-28 00:58:46.487309 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.487315 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.487320 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.487326 | orchestrator | 2026-02-28 00:58:46.487331 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-28 00:58:46.487337 | orchestrator | Saturday 28 February 2026 00:54:09 +0000 (0:00:00.281) 0:07:45.611 ***** 2026-02-28 00:58:46.487342 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.487348 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.487353 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.487359 | orchestrator | 2026-02-28 00:58:46.487364 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:58:46.487370 | orchestrator | Saturday 28 February 2026 00:54:09 +0000 (0:00:00.267) 0:07:45.879 ***** 2026-02-28 00:58:46.487375 | 
orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.487381 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.487387 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.487392 | orchestrator |
2026-02-28 00:58:46.487397 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-28 00:58:46.487403 | orchestrator | Saturday 28 February 2026 00:54:09 +0000 (0:00:00.269) 0:07:46.149 *****
2026-02-28 00:58:46.487408 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.487414 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.487419 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.487425 | orchestrator |
2026-02-28 00:58:46.487430 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-28 00:58:46.487436 | orchestrator | Saturday 28 February 2026 00:54:10 +0000 (0:00:00.960) 0:07:47.110 *****
2026-02-28 00:58:46.487441 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.487447 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.487452 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.487458 | orchestrator |
2026-02-28 00:58:46.487463 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-28 00:58:46.487469 | orchestrator | Saturday 28 February 2026 00:54:11 +0000 (0:00:00.275) 0:07:47.765 *****
2026-02-28 00:58:46.487474 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.487480 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.487485 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.487491 | orchestrator |
2026-02-28 00:58:46.487496 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-28 00:58:46.487501 | orchestrator | Saturday 28 February 2026 00:54:11 +0000 (0:00:00.284) 0:07:48.041 *****
2026-02-28 00:58:46.487507 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.487517 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.487522 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.487527 | orchestrator |
2026-02-28 00:58:46.487533 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-28 00:58:46.487538 | orchestrator | Saturday 28 February 2026 00:54:11 +0000 (0:00:00.284) 0:07:48.326 *****
2026-02-28 00:58:46.487544 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.487549 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.487555 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.487560 | orchestrator |
2026-02-28 00:58:46.487569 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-28 00:58:46.487575 | orchestrator | Saturday 28 February 2026 00:54:12 +0000 (0:00:00.470) 0:07:48.796 *****
2026-02-28 00:58:46.487580 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.487586 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.487591 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.487597 | orchestrator |
2026-02-28 00:58:46.487602 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-28 00:58:46.487622 | orchestrator | Saturday 28 February 2026 00:54:12 +0000 (0:00:00.281) 0:07:49.077 *****
2026-02-28 00:58:46.487628 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.487634 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.487642 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.487647 | orchestrator |
2026-02-28 00:58:46.487653 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-28 00:58:46.487659 | orchestrator | Saturday 28 February 2026 00:54:12 +0000 (0:00:00.296) 0:07:49.374 *****
2026-02-28 00:58:46.487664 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.487670 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.487675 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.487681 | orchestrator |
2026-02-28 00:58:46.487686 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-28 00:58:46.487692 | orchestrator | Saturday 28 February 2026 00:54:13 +0000 (0:00:00.288) 0:07:49.662 *****
2026-02-28 00:58:46.487697 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.487703 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.487708 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.487714 | orchestrator |
2026-02-28 00:58:46.487719 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-28 00:58:46.487725 | orchestrator | Saturday 28 February 2026 00:54:13 +0000 (0:00:00.484) 0:07:50.147 *****
2026-02-28 00:58:46.487730 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.487736 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.487741 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.487746 | orchestrator |
2026-02-28 00:58:46.487752 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-28 00:58:46.487758 | orchestrator | Saturday 28 February 2026 00:54:13 +0000 (0:00:00.286) 0:07:50.434 *****
2026-02-28 00:58:46.487763 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.487768 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.487774 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.487779 | orchestrator |
2026-02-28 00:58:46.487785 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-28 00:58:46.487790 | orchestrator | Saturday 28 February 2026 00:54:14 +0000 (0:00:00.366) 0:07:50.801 *****
2026-02-28 00:58:46.487796 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.487801 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.487807 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.487812 | orchestrator |
2026-02-28 00:58:46.487818 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-28 00:58:46.487823 | orchestrator | Saturday 28 February 2026 00:54:14 +0000 (0:00:00.643) 0:07:51.444 *****
2026-02-28 00:58:46.487828 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.487834 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.487839 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.487849 | orchestrator |
2026-02-28 00:58:46.487855 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-28 00:58:46.487861 | orchestrator | Saturday 28 February 2026 00:54:15 +0000 (0:00:00.303) 0:07:51.748 *****
2026-02-28 00:58:46.487866 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 00:58:46.487872 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 00:58:46.487877 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 00:58:46.487883 | orchestrator |
2026-02-28 00:58:46.487888 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-28 00:58:46.487894 | orchestrator | Saturday 28 February 2026 00:54:15 +0000 (0:00:00.591) 0:07:52.339 *****
2026-02-28 00:58:46.487899 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:46.487905 | orchestrator |
2026-02-28 00:58:46.487910 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-28 00:58:46.487916 | orchestrator | Saturday 28 February 2026 00:54:16 +0000 (0:00:00.520) 0:07:52.860 *****
2026-02-28 00:58:46.487921 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.487927 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.487932 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.487937 | orchestrator |
2026-02-28 00:58:46.487943 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-28 00:58:46.487948 | orchestrator | Saturday 28 February 2026 00:54:16 +0000 (0:00:00.469) 0:07:53.330 *****
2026-02-28 00:58:46.487954 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.487959 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.487965 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.487970 | orchestrator |
2026-02-28 00:58:46.487976 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-28 00:58:46.487981 | orchestrator | Saturday 28 February 2026 00:54:17 +0000 (0:00:00.284) 0:07:53.615 *****
2026-02-28 00:58:46.487987 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.487992 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.487998 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.488003 | orchestrator |
2026-02-28 00:58:46.488008 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-28 00:58:46.488014 | orchestrator | Saturday 28 February 2026 00:54:17 +0000 (0:00:00.631) 0:07:54.246 *****
2026-02-28 00:58:46.488019 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.488025 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.488030 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.488036 | orchestrator |
2026-02-28 00:58:46.488041 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-28 00:58:46.488047 | orchestrator | Saturday 28 February 2026 00:54:18 +0000 (0:00:00.294) 0:07:54.541 *****
2026-02-28 00:58:46.488055 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-28 00:58:46.488061 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-28 00:58:46.488066 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-28 00:58:46.488072 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-28 00:58:46.488077 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-28 00:58:46.488086 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-28 00:58:46.488092 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-28 00:58:46.488097 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-28 00:58:46.488103 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-28 00:58:46.488112 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-28 00:58:46.488118 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-28 00:58:46.488124 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-28 00:58:46.488129 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-28 00:58:46.488135 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-28 00:58:46.488140 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-28 00:58:46.488145 | orchestrator |
2026-02-28 00:58:46.488151 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-28 00:58:46.488156 | orchestrator | Saturday 28 February 2026 00:54:20 +0000 (0:00:02.216) 0:07:56.757 *****
2026-02-28 00:58:46.488162 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.488168 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.488173 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.488179 | orchestrator |
2026-02-28 00:58:46.488184 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-28 00:58:46.488190 | orchestrator | Saturday 28 February 2026 00:54:20 +0000 (0:00:00.299) 0:07:57.057 *****
2026-02-28 00:58:46.488195 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:46.488201 | orchestrator |
2026-02-28 00:58:46.488206 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-28 00:58:46.488211 | orchestrator | Saturday 28 February 2026 00:54:21 +0000 (0:00:00.463) 0:07:57.521 *****
2026-02-28 00:58:46.488217 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-28 00:58:46.488222 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-28 00:58:46.488228 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-28 00:58:46.488233 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-28 00:58:46.488239 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-28 00:58:46.488244 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-28 00:58:46.488250 | orchestrator |
2026-02-28 00:58:46.488255 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-28 00:58:46.488261 | orchestrator | Saturday 28 February 2026 00:54:22 +0000 (0:00:01.171) 0:07:58.692 *****
2026-02-28 00:58:46.488266 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-28 00:58:46.488272 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:58:46.488277 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-28 00:58:46.488283 | orchestrator |
2026-02-28 00:58:46.488288 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-28 00:58:46.488294 | orchestrator | Saturday 28 February 2026 00:54:24 +0000 (0:00:01.970) 0:08:00.663 *****
2026-02-28 00:58:46.488299 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-28 00:58:46.488305 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-28 00:58:46.488310 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:46.488316 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-28 00:58:46.488321 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-28 00:58:46.488327 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:46.488333 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-28 00:58:46.488338 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-28 00:58:46.488344 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:46.488349 | orchestrator |
2026-02-28 00:58:46.488355 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-28 00:58:46.488364 | orchestrator | Saturday 28 February 2026 00:54:25 +0000 (0:00:01.084) 0:08:01.747 *****
2026-02-28 00:58:46.488370 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:58:46.488375 | orchestrator |
2026-02-28 00:58:46.488381 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-28 00:58:46.488386 | orchestrator | Saturday 28 February 2026 00:54:27 +0000 (0:00:02.060) 0:08:03.808 *****
2026-02-28 00:58:46.488392 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:46.488397 | orchestrator |
2026-02-28 00:58:46.488403 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-28 00:58:46.488411 | orchestrator | Saturday 28 February 2026 00:54:28 +0000 (0:00:00.676) 0:08:04.484 *****
2026-02-28 00:58:46.488417 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e2365387-977d-5b6c-ac86-7516065bddb2', 'data_vg': 'ceph-e2365387-977d-5b6c-ac86-7516065bddb2'})
2026-02-28 00:58:46.488423 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e', 'data_vg': 'ceph-4eb2c6f9-5e6f-5ebf-87cf-ca4fabb96f6e'})
2026-02-28 00:58:46.488431 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4e9a8b5b-9130-5945-a817-2135e2f57de8', 'data_vg': 'ceph-4e9a8b5b-9130-5945-a817-2135e2f57de8'})
2026-02-28 00:58:46.488437 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c221fe87-4514-5691-85ae-4cf2e32a6a79', 'data_vg': 'ceph-c221fe87-4514-5691-85ae-4cf2e32a6a79'})
2026-02-28 00:58:46.488442 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-160cc444-1ede-5c9f-8076-16a146e97f10', 'data_vg': 'ceph-160cc444-1ede-5c9f-8076-16a146e97f10'})
2026-02-28 00:58:46.488448 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4d8e79be-6c7a-5031-8b8d-1755de447a00', 'data_vg': 'ceph-4d8e79be-6c7a-5031-8b8d-1755de447a00'})
2026-02-28 00:58:46.488453 | orchestrator |
2026-02-28 00:58:46.488459 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-28 00:58:46.488464 | orchestrator | Saturday 28 February 2026 00:55:13 +0000 (0:00:45.278) 0:08:49.763 *****
2026-02-28 00:58:46.488470 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.488475 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.488481 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.488486 | orchestrator |
2026-02-28 00:58:46.488492 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-28 00:58:46.488497 | orchestrator | Saturday 28 February 2026 00:55:13 +0000 (0:00:00.339) 0:08:50.102 *****
2026-02-28 00:58:46.488503 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:46.488508 | orchestrator |
2026-02-28 00:58:46.488514 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-28 00:58:46.488519 | orchestrator | Saturday 28 February 2026 00:55:14 +0000 (0:00:00.862) 0:08:50.965 *****
2026-02-28 00:58:46.488525 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.488530 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.488536 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.488541 | orchestrator |
2026-02-28 00:58:46.488547 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-28 00:58:46.488553 | orchestrator | Saturday 28 February 2026 00:55:15 +0000 (0:00:00.666) 0:08:51.631 *****
2026-02-28 00:58:46.488558 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.488564 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.488569 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.488574 | orchestrator |
2026-02-28 00:58:46.488580 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-28 00:58:46.488585 | orchestrator | Saturday 28 February 2026 00:55:17 +0000 (0:00:02.632) 0:08:54.263 *****
2026-02-28 00:58:46.488591 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:46.488602 | orchestrator |
2026-02-28 00:58:46.488638 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-28 00:58:46.488645 | orchestrator | Saturday 28 February 2026 00:55:18 +0000 (0:00:00.799) 0:08:55.063 *****
2026-02-28 00:58:46.488651 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:46.488656 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:46.488662 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:46.488667 | orchestrator |
2026-02-28 00:58:46.488673 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-28 00:58:46.488678 | orchestrator | Saturday 28 February 2026 00:55:19 +0000 (0:00:01.263) 0:08:56.326 *****
2026-02-28 00:58:46.488684 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:46.488689 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:46.488695 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:46.488700 | orchestrator |
2026-02-28 00:58:46.488706 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-28 00:58:46.488711 | orchestrator | Saturday 28 February 2026 00:55:21 +0000 (0:00:01.272) 0:08:57.598 *****
2026-02-28 00:58:46.488717 | orchestrator | changed: [testbed-node-3]
2026-02-28 00:58:46.488722 | orchestrator | changed: [testbed-node-4]
2026-02-28 00:58:46.488728 | orchestrator | changed: [testbed-node-5]
2026-02-28 00:58:46.488733 | orchestrator |
2026-02-28 00:58:46.488738 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-28 00:58:46.488744 | orchestrator | Saturday 28 February 2026 00:55:23 +0000 (0:00:02.026) 0:08:59.625 *****
2026-02-28 00:58:46.488749 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.488755 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.488760 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.488766 | orchestrator |
2026-02-28 00:58:46.488771 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-28 00:58:46.488777 | orchestrator | Saturday 28 February 2026 00:55:23 +0000 (0:00:00.650) 0:09:00.276 *****
2026-02-28 00:58:46.488782 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.488788 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.488793 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.488798 | orchestrator |
2026-02-28 00:58:46.488804 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-28 00:58:46.488809 | orchestrator | Saturday 28 February 2026 00:55:24 +0000 (0:00:00.396) 0:09:00.673 *****
2026-02-28 00:58:46.488815 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-02-28 00:58:46.488820 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-02-28 00:58:46.488826 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-28 00:58:46.488831 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-02-28 00:58:46.488837 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-28 00:58:46.488845 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-02-28 00:58:46.488851 | orchestrator |
2026-02-28 00:58:46.488856 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-28 00:58:46.488862 | orchestrator | Saturday 28 February 2026 00:55:25 +0000 (0:00:01.202) 0:09:01.875 *****
2026-02-28 00:58:46.488867 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-02-28 00:58:46.488873 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-02-28 00:58:46.488878 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-28 00:58:46.488884 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-02-28 00:58:46.488892 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-02-28 00:58:46.488897 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-02-28 00:58:46.488903 | orchestrator |
2026-02-28 00:58:46.488908 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-28 00:58:46.488914 | orchestrator | Saturday 28 February 2026 00:55:27 +0000 (0:00:02.230) 0:09:04.106 *****
2026-02-28 00:58:46.488919 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-02-28 00:58:46.488925 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-02-28 00:58:46.488930 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-28 00:58:46.488939 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-02-28 00:58:46.488945 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-02-28 00:58:46.488950 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-02-28 00:58:46.488956 | orchestrator |
2026-02-28 00:58:46.488961 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-28 00:58:46.488966 | orchestrator | Saturday 28 February 2026 00:55:31 +0000 (0:00:04.136) 0:09:08.243 *****
2026-02-28 00:58:46.488972 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.488977 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.488983 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:58:46.488988 | orchestrator |
2026-02-28 00:58:46.488994 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-28 00:58:46.488999 | orchestrator | Saturday 28 February 2026 00:55:34 +0000 (0:00:02.657) 0:09:10.900 *****
2026-02-28 00:58:46.489005 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489010 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.489015 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-28 00:58:46.489021 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-28 00:58:46.489026 | orchestrator |
2026-02-28 00:58:46.489032 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-28 00:58:46.489037 | orchestrator | Saturday 28 February 2026 00:55:47 +0000 (0:00:12.820) 0:09:23.721 *****
2026-02-28 00:58:46.489043 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489048 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.489054 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.489059 | orchestrator |
2026-02-28 00:58:46.489065 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-28 00:58:46.489070 | orchestrator | Saturday 28 February 2026 00:55:48 +0000 (0:00:01.168) 0:09:24.889 *****
2026-02-28 00:58:46.489075 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489081 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.489086 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.489091 | orchestrator |
2026-02-28 00:58:46.489097 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-28 00:58:46.489102 | orchestrator | Saturday 28 February 2026 00:55:48 +0000 (0:00:00.427) 0:09:25.317 *****
2026-02-28 00:58:46.489108 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 00:58:46.489113 | orchestrator |
2026-02-28 00:58:46.489119 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-28 00:58:46.489124 | orchestrator | Saturday 28 February 2026 00:55:49 +0000 (0:00:00.867) 0:09:26.184 *****
2026-02-28 00:58:46.489130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:46.489135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:46.489140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:46.489146 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489151 | orchestrator |
2026-02-28 00:58:46.489156 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-28 00:58:46.489160 | orchestrator | Saturday 28 February 2026 00:55:50 +0000 (0:00:00.448) 0:09:26.632 *****
2026-02-28 00:58:46.489165 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489170 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.489175 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.489180 | orchestrator |
2026-02-28 00:58:46.489185 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-28 00:58:46.489189 | orchestrator | Saturday 28 February 2026 00:55:50 +0000 (0:00:00.369) 0:09:27.002 *****
2026-02-28 00:58:46.489194 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489199 | orchestrator |
2026-02-28 00:58:46.489207 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-28 00:58:46.489212 | orchestrator | Saturday 28 February 2026 00:55:50 +0000 (0:00:00.265) 0:09:27.268 *****
2026-02-28 00:58:46.489217 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489222 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.489227 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.489232 | orchestrator |
2026-02-28 00:58:46.489236 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-28 00:58:46.489241 | orchestrator | Saturday 28 February 2026 00:55:51 +0000 (0:00:00.340) 0:09:27.608 *****
2026-02-28 00:58:46.489246 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489251 | orchestrator |
2026-02-28 00:58:46.489256 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-28 00:58:46.489261 | orchestrator | Saturday 28 February 2026 00:55:51 +0000 (0:00:00.245) 0:09:27.854 *****
2026-02-28 00:58:46.489265 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489270 | orchestrator |
2026-02-28 00:58:46.489278 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-28 00:58:46.489283 | orchestrator | Saturday 28 February 2026 00:55:51 +0000 (0:00:00.251) 0:09:28.105 *****
2026-02-28 00:58:46.489288 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489293 | orchestrator |
2026-02-28 00:58:46.489298 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-28 00:58:46.489303 | orchestrator | Saturday 28 February 2026 00:55:51 +0000 (0:00:00.135) 0:09:28.241 *****
2026-02-28 00:58:46.489307 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489312 | orchestrator |
2026-02-28 00:58:46.489319 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-28 00:58:46.489324 | orchestrator | Saturday 28 February 2026 00:55:52 +0000 (0:00:01.035) 0:09:29.277 *****
2026-02-28 00:58:46.489329 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489334 | orchestrator |
2026-02-28 00:58:46.489339 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-28 00:58:46.489343 | orchestrator | Saturday 28 February 2026 00:55:53 +0000 (0:00:00.424) 0:09:29.701 *****
2026-02-28 00:58:46.489348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-28 00:58:46.489353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-28 00:58:46.489358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-28 00:58:46.489363 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489368 | orchestrator |
2026-02-28 00:58:46.489373 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-28 00:58:46.489378 | orchestrator | Saturday 28 February 2026 00:55:53 +0000 (0:00:00.468) 0:09:30.170 *****
2026-02-28 00:58:46.489383 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489388 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.489392 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.489397 | orchestrator |
2026-02-28 00:58:46.489402 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-28 00:58:46.489407 | orchestrator | Saturday 28 February 2026 00:55:54 +0000 (0:00:00.600) 0:09:30.771 *****
2026-02-28 00:58:46.489412 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489416 | orchestrator |
2026-02-28 00:58:46.489421 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-28 00:58:46.489426 | orchestrator | Saturday 28 February 2026 00:55:54 +0000 (0:00:00.243) 0:09:31.014 *****
2026-02-28 00:58:46.489431 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489436 | orchestrator |
2026-02-28 00:58:46.489441 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-28 00:58:46.489445 | orchestrator |
2026-02-28 00:58:46.489450 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-28 00:58:46.489455 | orchestrator | Saturday 28 February 2026 00:55:55 +0000 (0:00:01.002) 0:09:32.017 *****
2026-02-28 00:58:46.489460 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:46.489472 | orchestrator |
2026-02-28 00:58:46.489480 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-28 00:58:46.489489 | orchestrator | Saturday 28 February 2026 00:55:56 +0000 (0:00:01.427) 0:09:33.444 *****
2026-02-28 00:58:46.489496 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 00:58:46.489504 | orchestrator |
2026-02-28 00:58:46.489511 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-28 00:58:46.489519 | orchestrator | Saturday 28 February 2026 00:55:58 +0000 (0:00:01.335) 0:09:34.780 *****
2026-02-28 00:58:46.489526 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489534 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.489542 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.489549 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:46.489556 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:46.489562 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:46.489569 | orchestrator |
2026-02-28 00:58:46.489577 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-28 00:58:46.489585 | orchestrator | Saturday 28 February 2026 00:55:59 +0000 (0:00:01.090) 0:09:35.871 *****
2026-02-28 00:58:46.489592 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.489600 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.489619 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.489628 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.489636 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.489644 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.489651 | orchestrator |
2026-02-28 00:58:46.489660 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-28 00:58:46.489668 | orchestrator | Saturday 28 February 2026 00:56:00 +0000 (0:00:01.018) 0:09:36.889 *****
2026-02-28 00:58:46.489676 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.489684 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.489692 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.489699 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.489704 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.489709 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.489713 | orchestrator |
2026-02-28 00:58:46.489718 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-28 00:58:46.489723 | orchestrator | Saturday 28 February 2026 00:56:01 +0000 (0:00:00.776) 0:09:37.666 *****
2026-02-28 00:58:46.489728 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.489733 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.489738 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.489743 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.489747 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.489752 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.489757 | orchestrator |
2026-02-28 00:58:46.489762 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-28 00:58:46.489767 | orchestrator | Saturday 28 February 2026 00:56:01 +0000 (0:00:00.741) 0:09:38.408 *****
2026-02-28 00:58:46.489776 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489781 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.489786 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.489791 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:46.489795 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:46.489800 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:46.489805 | orchestrator |
2026-02-28 00:58:46.489810 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-28 00:58:46.489815 | orchestrator | Saturday 28 February 2026 00:56:03 +0000 (0:00:01.376) 0:09:39.785 *****
2026-02-28 00:58:46.489820 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489830 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.489838 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.489843 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.489848 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.489853 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.489858 | orchestrator |
2026-02-28 00:58:46.489863 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-28 00:58:46.489868 | orchestrator | Saturday 28 February 2026 00:56:03 +0000 (0:00:00.639) 0:09:40.424 *****
2026-02-28 00:58:46.489873 | orchestrator | skipping: [testbed-node-3]
2026-02-28 00:58:46.489877 | orchestrator | skipping: [testbed-node-4]
2026-02-28 00:58:46.489882 | orchestrator | skipping: [testbed-node-5]
2026-02-28 00:58:46.489887 | orchestrator | skipping: [testbed-node-0]
2026-02-28 00:58:46.489892 | orchestrator | skipping: [testbed-node-1]
2026-02-28 00:58:46.489897 | orchestrator | skipping: [testbed-node-2]
2026-02-28 00:58:46.489901 | orchestrator |
2026-02-28 00:58:46.489906 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-28 00:58:46.489911 | orchestrator | Saturday 28 February 2026 00:56:04 +0000 (0:00:00.968) 0:09:41.392 *****
2026-02-28 00:58:46.489916 | orchestrator | ok: [testbed-node-3]
2026-02-28 00:58:46.489921 | orchestrator | ok: [testbed-node-4]
2026-02-28 00:58:46.489926 | orchestrator | ok: [testbed-node-5]
2026-02-28 00:58:46.489931 | orchestrator | ok: [testbed-node-0]
2026-02-28 00:58:46.489936 | orchestrator | ok: [testbed-node-1]
2026-02-28 00:58:46.489940 | orchestrator | ok: [testbed-node-2]
2026-02-28 00:58:46.489945 | orchestrator
| 2026-02-28 00:58:46.489950 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:58:46.489955 | orchestrator | Saturday 28 February 2026 00:56:06 +0000 (0:00:01.076) 0:09:42.469 ***** 2026-02-28 00:58:46.489960 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.489965 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.489970 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.489974 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.489979 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.489984 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.489989 | orchestrator | 2026-02-28 00:58:46.489994 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:58:46.489999 | orchestrator | Saturday 28 February 2026 00:56:07 +0000 (0:00:01.395) 0:09:43.864 ***** 2026-02-28 00:58:46.490004 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.490008 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.490013 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.490051 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.490056 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.490061 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.490066 | orchestrator | 2026-02-28 00:58:46.490071 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:58:46.490076 | orchestrator | Saturday 28 February 2026 00:56:08 +0000 (0:00:00.622) 0:09:44.487 ***** 2026-02-28 00:58:46.490080 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.490085 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.490090 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.490095 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.490100 | orchestrator | ok: [testbed-node-1] 2026-02-28 
00:58:46.490104 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.490109 | orchestrator | 2026-02-28 00:58:46.490114 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:58:46.490119 | orchestrator | Saturday 28 February 2026 00:56:08 +0000 (0:00:00.937) 0:09:45.424 ***** 2026-02-28 00:58:46.490124 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.490128 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.490133 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.490138 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.490143 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.490152 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.490157 | orchestrator | 2026-02-28 00:58:46.490162 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:58:46.490166 | orchestrator | Saturday 28 February 2026 00:56:09 +0000 (0:00:00.695) 0:09:46.119 ***** 2026-02-28 00:58:46.490171 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.490176 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.490181 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.490186 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.490190 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.490195 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.490200 | orchestrator | 2026-02-28 00:58:46.490205 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:58:46.490210 | orchestrator | Saturday 28 February 2026 00:56:10 +0000 (0:00:00.995) 0:09:47.115 ***** 2026-02-28 00:58:46.490214 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.490219 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.490224 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.490229 | orchestrator | skipping: [testbed-node-0] 
2026-02-28 00:58:46.490233 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.490238 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.490243 | orchestrator | 2026-02-28 00:58:46.490248 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:58:46.490253 | orchestrator | Saturday 28 February 2026 00:56:11 +0000 (0:00:00.687) 0:09:47.802 ***** 2026-02-28 00:58:46.490257 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.490262 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.490267 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.490272 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.490276 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.490281 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.490286 | orchestrator | 2026-02-28 00:58:46.490291 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:58:46.490299 | orchestrator | Saturday 28 February 2026 00:56:12 +0000 (0:00:00.918) 0:09:48.720 ***** 2026-02-28 00:58:46.490304 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.490309 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.490314 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.490318 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:58:46.490323 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:58:46.490328 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:58:46.490333 | orchestrator | 2026-02-28 00:58:46.490338 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:58:46.490342 | orchestrator | Saturday 28 February 2026 00:56:12 +0000 (0:00:00.702) 0:09:49.423 ***** 2026-02-28 00:58:46.490351 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.490356 | orchestrator | skipping: [testbed-node-4] 
2026-02-28 00:58:46.490360 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.490365 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.490370 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.490375 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.490380 | orchestrator | 2026-02-28 00:58:46.490385 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:58:46.490389 | orchestrator | Saturday 28 February 2026 00:56:13 +0000 (0:00:01.004) 0:09:50.427 ***** 2026-02-28 00:58:46.490394 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.490399 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.490404 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.490409 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.490414 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.490418 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.490423 | orchestrator | 2026-02-28 00:58:46.490428 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:58:46.490433 | orchestrator | Saturday 28 February 2026 00:56:14 +0000 (0:00:00.733) 0:09:51.161 ***** 2026-02-28 00:58:46.490441 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.490446 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.490451 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.490456 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.490461 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.490465 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.490470 | orchestrator | 2026-02-28 00:58:46.490475 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-28 00:58:46.490480 | orchestrator | Saturday 28 February 2026 00:56:16 +0000 (0:00:01.462) 0:09:52.624 ***** 2026-02-28 00:58:46.490485 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-02-28 00:58:46.490490 | orchestrator | 2026-02-28 00:58:46.490495 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-28 00:58:46.490499 | orchestrator | Saturday 28 February 2026 00:56:20 +0000 (0:00:04.123) 0:09:56.747 ***** 2026-02-28 00:58:46.490504 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:58:46.490509 | orchestrator | 2026-02-28 00:58:46.490514 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-28 00:58:46.490519 | orchestrator | Saturday 28 February 2026 00:56:22 +0000 (0:00:02.065) 0:09:58.813 ***** 2026-02-28 00:58:46.490524 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.490528 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.490533 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.490538 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.490543 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.490548 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.490553 | orchestrator | 2026-02-28 00:58:46.490557 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-28 00:58:46.490562 | orchestrator | Saturday 28 February 2026 00:56:24 +0000 (0:00:01.965) 0:10:00.779 ***** 2026-02-28 00:58:46.490567 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.490572 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.490577 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.490582 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.490587 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.490591 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.490596 | orchestrator | 2026-02-28 00:58:46.490601 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-02-28 00:58:46.490606 | orchestrator | Saturday 28 February 2026 00:56:25 +0000 (0:00:01.030) 0:10:01.809 ***** 2026-02-28 00:58:46.490642 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.490648 | orchestrator | 2026-02-28 00:58:46.490653 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-28 00:58:46.490658 | orchestrator | Saturday 28 February 2026 00:56:26 +0000 (0:00:01.372) 0:10:03.182 ***** 2026-02-28 00:58:46.490663 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.490668 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.490673 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.490678 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.490683 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.490688 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.490693 | orchestrator | 2026-02-28 00:58:46.490698 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-28 00:58:46.490703 | orchestrator | Saturday 28 February 2026 00:56:28 +0000 (0:00:02.022) 0:10:05.204 ***** 2026-02-28 00:58:46.490708 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.490713 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.490718 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.490723 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.490727 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.490732 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.490741 | orchestrator | 2026-02-28 00:58:46.490746 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-28 00:58:46.490751 | orchestrator | Saturday 28 February 2026 00:56:32 +0000 (0:00:03.803) 
0:10:09.008 ***** 2026-02-28 00:58:46.490756 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:58:46.490761 | orchestrator | 2026-02-28 00:58:46.490766 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-28 00:58:46.490774 | orchestrator | Saturday 28 February 2026 00:56:34 +0000 (0:00:01.442) 0:10:10.451 ***** 2026-02-28 00:58:46.490779 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.490784 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.490789 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.490794 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.490799 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.490804 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.490809 | orchestrator | 2026-02-28 00:58:46.490814 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-28 00:58:46.490819 | orchestrator | Saturday 28 February 2026 00:56:35 +0000 (0:00:01.046) 0:10:11.498 ***** 2026-02-28 00:58:46.490826 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.490831 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.490836 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.490841 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:58:46.490846 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:58:46.490851 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:58:46.490856 | orchestrator | 2026-02-28 00:58:46.490861 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-28 00:58:46.490866 | orchestrator | Saturday 28 February 2026 00:56:37 +0000 (0:00:02.792) 0:10:14.290 ***** 2026-02-28 00:58:46.490871 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.490876 | orchestrator | 
ok: [testbed-node-4] 2026-02-28 00:58:46.490881 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.490886 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:58:46.490891 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:58:46.490896 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:58:46.490901 | orchestrator | 2026-02-28 00:58:46.490906 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-28 00:58:46.490911 | orchestrator | 2026-02-28 00:58:46.490916 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 00:58:46.490921 | orchestrator | Saturday 28 February 2026 00:56:39 +0000 (0:00:01.286) 0:10:15.577 ***** 2026-02-28 00:58:46.490926 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.490930 | orchestrator | 2026-02-28 00:58:46.490936 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-28 00:58:46.490940 | orchestrator | Saturday 28 February 2026 00:56:39 +0000 (0:00:00.583) 0:10:16.160 ***** 2026-02-28 00:58:46.490945 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.490950 | orchestrator | 2026-02-28 00:58:46.490955 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:58:46.490960 | orchestrator | Saturday 28 February 2026 00:56:40 +0000 (0:00:00.843) 0:10:17.003 ***** 2026-02-28 00:58:46.490965 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.490970 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.490975 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.490980 | orchestrator | 2026-02-28 00:58:46.490985 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-02-28 00:58:46.490990 | orchestrator | Saturday 28 February 2026 00:56:40 +0000 (0:00:00.317) 0:10:17.321 ***** 2026-02-28 00:58:46.490995 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.491003 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.491008 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.491013 | orchestrator | 2026-02-28 00:58:46.491017 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:58:46.491022 | orchestrator | Saturday 28 February 2026 00:56:41 +0000 (0:00:00.786) 0:10:18.108 ***** 2026-02-28 00:58:46.491027 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.491032 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.491037 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.491042 | orchestrator | 2026-02-28 00:58:46.491047 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:58:46.491052 | orchestrator | Saturday 28 February 2026 00:56:42 +0000 (0:00:01.125) 0:10:19.233 ***** 2026-02-28 00:58:46.491057 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.491062 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.491067 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.491072 | orchestrator | 2026-02-28 00:58:46.491076 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:58:46.491081 | orchestrator | Saturday 28 February 2026 00:56:43 +0000 (0:00:00.763) 0:10:19.996 ***** 2026-02-28 00:58:46.491086 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.491091 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.491096 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.491101 | orchestrator | 2026-02-28 00:58:46.491106 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-28 
00:58:46.491111 | orchestrator | Saturday 28 February 2026 00:56:43 +0000 (0:00:00.353) 0:10:20.349 ***** 2026-02-28 00:58:46.491116 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.491121 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.491126 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.491131 | orchestrator | 2026-02-28 00:58:46.491136 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:58:46.491141 | orchestrator | Saturday 28 February 2026 00:56:44 +0000 (0:00:00.374) 0:10:20.724 ***** 2026-02-28 00:58:46.491146 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.491150 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.491155 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.491160 | orchestrator | 2026-02-28 00:58:46.491165 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:58:46.491170 | orchestrator | Saturday 28 February 2026 00:56:44 +0000 (0:00:00.598) 0:10:21.323 ***** 2026-02-28 00:58:46.491175 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.491180 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.491184 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.491189 | orchestrator | 2026-02-28 00:58:46.491193 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:58:46.491198 | orchestrator | Saturday 28 February 2026 00:56:45 +0000 (0:00:00.838) 0:10:22.161 ***** 2026-02-28 00:58:46.491203 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.491207 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.491212 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.491216 | orchestrator | 2026-02-28 00:58:46.491224 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:58:46.491229 | orchestrator | 
Saturday 28 February 2026 00:56:46 +0000 (0:00:00.792) 0:10:22.954 ***** 2026-02-28 00:58:46.491233 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.491238 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.491243 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.491247 | orchestrator | 2026-02-28 00:58:46.491252 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:58:46.491256 | orchestrator | Saturday 28 February 2026 00:56:47 +0000 (0:00:00.599) 0:10:23.553 ***** 2026-02-28 00:58:46.491263 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.491268 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.491273 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.491282 | orchestrator | 2026-02-28 00:58:46.491287 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:58:46.491292 | orchestrator | Saturday 28 February 2026 00:56:47 +0000 (0:00:00.711) 0:10:24.264 ***** 2026-02-28 00:58:46.491296 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.491301 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.491306 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.491310 | orchestrator | 2026-02-28 00:58:46.491315 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:58:46.491320 | orchestrator | Saturday 28 February 2026 00:56:48 +0000 (0:00:00.457) 0:10:24.722 ***** 2026-02-28 00:58:46.491324 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.491329 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.491334 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.491338 | orchestrator | 2026-02-28 00:58:46.491343 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:58:46.491347 | orchestrator | Saturday 28 February 2026 00:56:48 
+0000 (0:00:00.458) 0:10:25.181 ***** 2026-02-28 00:58:46.491352 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.491357 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.491361 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.491366 | orchestrator | 2026-02-28 00:58:46.491371 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:58:46.491376 | orchestrator | Saturday 28 February 2026 00:56:49 +0000 (0:00:00.425) 0:10:25.606 ***** 2026-02-28 00:58:46.491380 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.491385 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.491390 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.491394 | orchestrator | 2026-02-28 00:58:46.491399 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:58:46.491404 | orchestrator | Saturday 28 February 2026 00:56:50 +0000 (0:00:00.943) 0:10:26.550 ***** 2026-02-28 00:58:46.491408 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.491413 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.491418 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.491422 | orchestrator | 2026-02-28 00:58:46.491427 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:58:46.491432 | orchestrator | Saturday 28 February 2026 00:56:50 +0000 (0:00:00.430) 0:10:26.980 ***** 2026-02-28 00:58:46.491436 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.491441 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.491446 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.491450 | orchestrator | 2026-02-28 00:58:46.491455 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:58:46.491460 | orchestrator | Saturday 28 February 2026 00:56:50 +0000 (0:00:00.338) 
0:10:27.318 ***** 2026-02-28 00:58:46.491464 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.491469 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.491474 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.491478 | orchestrator | 2026-02-28 00:58:46.491483 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:58:46.491488 | orchestrator | Saturday 28 February 2026 00:56:51 +0000 (0:00:00.404) 0:10:27.723 ***** 2026-02-28 00:58:46.491493 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.491497 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.491502 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.491506 | orchestrator | 2026-02-28 00:58:46.491511 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-28 00:58:46.491516 | orchestrator | Saturday 28 February 2026 00:56:52 +0000 (0:00:01.257) 0:10:28.980 ***** 2026-02-28 00:58:46.491520 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.491525 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.491530 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-02-28 00:58:46.491535 | orchestrator | 2026-02-28 00:58:46.491539 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-02-28 00:58:46.491548 | orchestrator | Saturday 28 February 2026 00:56:52 +0000 (0:00:00.435) 0:10:29.416 ***** 2026-02-28 00:58:46.491553 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:58:46.491558 | orchestrator | 2026-02-28 00:58:46.491562 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-02-28 00:58:46.491567 | orchestrator | Saturday 28 February 2026 00:56:55 +0000 (0:00:02.423) 0:10:31.840 ***** 2026-02-28 00:58:46.491574 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-02-28 00:58:46.491580 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.491585 | orchestrator | 2026-02-28 00:58:46.491590 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-02-28 00:58:46.491594 | orchestrator | Saturday 28 February 2026 00:56:56 +0000 (0:00:00.726) 0:10:32.566 ***** 2026-02-28 00:58:46.491604 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 00:58:46.491626 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 00:58:46.491632 | orchestrator | 2026-02-28 00:58:46.491636 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-28 00:58:46.491644 | orchestrator | Saturday 28 February 2026 00:57:04 +0000 (0:00:08.565) 0:10:41.132 ***** 2026-02-28 00:58:46.491648 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-28 00:58:46.491653 | orchestrator | 2026-02-28 00:58:46.491658 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-28 00:58:46.491662 | orchestrator | Saturday 28 February 2026 00:57:07 +0000 (0:00:03.262) 0:10:44.394 ***** 2026-02-28 00:58:46.491667 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-28 00:58:46.491672 | orchestrator | 2026-02-28 00:58:46.491676 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-28 00:58:46.491681 | orchestrator | Saturday 28 February 2026 00:57:08 +0000 (0:00:00.599) 0:10:44.993 ***** 2026-02-28 00:58:46.491685 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-28 00:58:46.491690 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-28 00:58:46.491695 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-28 00:58:46.491699 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-28 00:58:46.491704 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-28 00:58:46.491709 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-28 00:58:46.491713 | orchestrator | 2026-02-28 00:58:46.491718 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-28 00:58:46.491722 | orchestrator | Saturday 28 February 2026 00:57:09 +0000 (0:00:01.088) 0:10:46.082 ***** 2026-02-28 00:58:46.491727 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.491732 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-28 00:58:46.491736 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-28 00:58:46.491741 | orchestrator | 2026-02-28 00:58:46.491745 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-28 00:58:46.491750 | orchestrator | Saturday 28 February 2026 00:57:12 +0000 (0:00:02.476) 0:10:48.558 ***** 2026-02-28 00:58:46.491759 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-28 00:58:46.491763 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-02-28 00:58:46.491768 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.491773 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-28 00:58:46.491777 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-28 00:58:46.491782 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.491786 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-28 00:58:46.491791 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-28 00:58:46.491796 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.491800 | orchestrator | 2026-02-28 00:58:46.491805 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-28 00:58:46.491809 | orchestrator | Saturday 28 February 2026 00:57:13 +0000 (0:00:01.254) 0:10:49.812 ***** 2026-02-28 00:58:46.491814 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.491818 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.491823 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.491828 | orchestrator | 2026-02-28 00:58:46.491832 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-28 00:58:46.491837 | orchestrator | Saturday 28 February 2026 00:57:16 +0000 (0:00:02.677) 0:10:52.490 ***** 2026-02-28 00:58:46.491841 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.491846 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.491850 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.491855 | orchestrator | 2026-02-28 00:58:46.491860 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-28 00:58:46.491864 | orchestrator | Saturday 28 February 2026 00:57:16 +0000 (0:00:00.279) 0:10:52.770 ***** 2026-02-28 00:58:46.491869 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-28 00:58:46.491873 | orchestrator | 2026-02-28 00:58:46.491878 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-28 00:58:46.491883 | orchestrator | Saturday 28 February 2026 00:57:17 +0000 (0:00:00.682) 0:10:53.452 ***** 2026-02-28 00:58:46.491887 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.491892 | orchestrator | 2026-02-28 00:58:46.491896 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-28 00:58:46.491901 | orchestrator | Saturday 28 February 2026 00:57:17 +0000 (0:00:00.503) 0:10:53.956 ***** 2026-02-28 00:58:46.491905 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.491910 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.491914 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.491919 | orchestrator | 2026-02-28 00:58:46.491924 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-28 00:58:46.491928 | orchestrator | Saturday 28 February 2026 00:57:18 +0000 (0:00:01.087) 0:10:55.043 ***** 2026-02-28 00:58:46.491933 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.491937 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.491942 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.491947 | orchestrator | 2026-02-28 00:58:46.491954 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-28 00:58:46.491958 | orchestrator | Saturday 28 February 2026 00:57:19 +0000 (0:00:01.238) 0:10:56.281 ***** 2026-02-28 00:58:46.491963 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.491968 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.491972 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.491977 | orchestrator | 2026-02-28 
00:58:46.491981 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-28 00:58:46.491986 | orchestrator | Saturday 28 February 2026 00:57:21 +0000 (0:00:01.792) 0:10:58.074 ***** 2026-02-28 00:58:46.491990 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.492001 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.492005 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.492010 | orchestrator | 2026-02-28 00:58:46.492015 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-28 00:58:46.492019 | orchestrator | Saturday 28 February 2026 00:57:23 +0000 (0:00:01.974) 0:11:00.048 ***** 2026-02-28 00:58:46.492024 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492028 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492033 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492037 | orchestrator | 2026-02-28 00:58:46.492042 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-28 00:58:46.492046 | orchestrator | Saturday 28 February 2026 00:57:25 +0000 (0:00:01.594) 0:11:01.643 ***** 2026-02-28 00:58:46.492051 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.492055 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.492060 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.492064 | orchestrator | 2026-02-28 00:58:46.492069 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-28 00:58:46.492074 | orchestrator | Saturday 28 February 2026 00:57:25 +0000 (0:00:00.701) 0:11:02.344 ***** 2026-02-28 00:58:46.492078 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.492083 | orchestrator | 2026-02-28 00:58:46.492087 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-28 00:58:46.492092 | orchestrator | Saturday 28 February 2026 00:57:26 +0000 (0:00:00.875) 0:11:03.220 ***** 2026-02-28 00:58:46.492096 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492101 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492106 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492110 | orchestrator | 2026-02-28 00:58:46.492115 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-28 00:58:46.492119 | orchestrator | Saturday 28 February 2026 00:57:27 +0000 (0:00:00.371) 0:11:03.591 ***** 2026-02-28 00:58:46.492124 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.492128 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.492133 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.492137 | orchestrator | 2026-02-28 00:58:46.492142 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-28 00:58:46.492147 | orchestrator | Saturday 28 February 2026 00:57:28 +0000 (0:00:01.238) 0:11:04.830 ***** 2026-02-28 00:58:46.492151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:58:46.492156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:58:46.492160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:58:46.492165 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492170 | orchestrator | 2026-02-28 00:58:46.492174 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-28 00:58:46.492179 | orchestrator | Saturday 28 February 2026 00:57:29 +0000 (0:00:00.971) 0:11:05.801 ***** 2026-02-28 00:58:46.492183 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492188 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492193 | orchestrator | ok: [testbed-node-5] 2026-02-28 
00:58:46.492197 | orchestrator | 2026-02-28 00:58:46.492202 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-28 00:58:46.492206 | orchestrator | 2026-02-28 00:58:46.492211 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-28 00:58:46.492215 | orchestrator | Saturday 28 February 2026 00:57:30 +0000 (0:00:00.926) 0:11:06.727 ***** 2026-02-28 00:58:46.492220 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.492225 | orchestrator | 2026-02-28 00:58:46.492229 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-28 00:58:46.492234 | orchestrator | Saturday 28 February 2026 00:57:30 +0000 (0:00:00.588) 0:11:07.316 ***** 2026-02-28 00:58:46.492242 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.492246 | orchestrator | 2026-02-28 00:58:46.492251 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-28 00:58:46.492256 | orchestrator | Saturday 28 February 2026 00:57:31 +0000 (0:00:00.806) 0:11:08.123 ***** 2026-02-28 00:58:46.492260 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492265 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.492269 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.492274 | orchestrator | 2026-02-28 00:58:46.492278 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-28 00:58:46.492283 | orchestrator | Saturday 28 February 2026 00:57:31 +0000 (0:00:00.317) 0:11:08.440 ***** 2026-02-28 00:58:46.492288 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492292 | orchestrator | ok: [testbed-node-4] 2026-02-28 
00:58:46.492297 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492301 | orchestrator | 2026-02-28 00:58:46.492306 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-28 00:58:46.492311 | orchestrator | Saturday 28 February 2026 00:57:32 +0000 (0:00:00.726) 0:11:09.166 ***** 2026-02-28 00:58:46.492315 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492320 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492324 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492329 | orchestrator | 2026-02-28 00:58:46.492333 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-28 00:58:46.492341 | orchestrator | Saturday 28 February 2026 00:57:33 +0000 (0:00:01.002) 0:11:10.168 ***** 2026-02-28 00:58:46.492345 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492350 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492355 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492359 | orchestrator | 2026-02-28 00:58:46.492364 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-28 00:58:46.492368 | orchestrator | Saturday 28 February 2026 00:57:34 +0000 (0:00:00.772) 0:11:10.941 ***** 2026-02-28 00:58:46.492373 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492378 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.492382 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.492387 | orchestrator | 2026-02-28 00:58:46.492394 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-28 00:58:46.492398 | orchestrator | Saturday 28 February 2026 00:57:34 +0000 (0:00:00.333) 0:11:11.274 ***** 2026-02-28 00:58:46.492403 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492408 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.492412 | orchestrator | skipping: 
[testbed-node-5] 2026-02-28 00:58:46.492417 | orchestrator | 2026-02-28 00:58:46.492421 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-28 00:58:46.492426 | orchestrator | Saturday 28 February 2026 00:57:35 +0000 (0:00:00.334) 0:11:11.609 ***** 2026-02-28 00:58:46.492431 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492435 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.492440 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.492444 | orchestrator | 2026-02-28 00:58:46.492449 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-28 00:58:46.492454 | orchestrator | Saturday 28 February 2026 00:57:35 +0000 (0:00:00.688) 0:11:12.298 ***** 2026-02-28 00:58:46.492458 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492463 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492467 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492472 | orchestrator | 2026-02-28 00:58:46.492476 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-28 00:58:46.492481 | orchestrator | Saturday 28 February 2026 00:57:36 +0000 (0:00:00.771) 0:11:13.069 ***** 2026-02-28 00:58:46.492486 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492490 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492498 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492502 | orchestrator | 2026-02-28 00:58:46.492507 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-28 00:58:46.492512 | orchestrator | Saturday 28 February 2026 00:57:37 +0000 (0:00:00.751) 0:11:13.820 ***** 2026-02-28 00:58:46.492516 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492521 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.492525 | orchestrator | skipping: [testbed-node-5] 2026-02-28 
00:58:46.492530 | orchestrator | 2026-02-28 00:58:46.492535 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-28 00:58:46.492539 | orchestrator | Saturday 28 February 2026 00:57:37 +0000 (0:00:00.405) 0:11:14.226 ***** 2026-02-28 00:58:46.492544 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492548 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.492553 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.492557 | orchestrator | 2026-02-28 00:58:46.492562 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-28 00:58:46.492567 | orchestrator | Saturday 28 February 2026 00:57:38 +0000 (0:00:00.690) 0:11:14.916 ***** 2026-02-28 00:58:46.492571 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492576 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492580 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492585 | orchestrator | 2026-02-28 00:58:46.492589 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-28 00:58:46.492594 | orchestrator | Saturday 28 February 2026 00:57:38 +0000 (0:00:00.367) 0:11:15.284 ***** 2026-02-28 00:58:46.492598 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492603 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492619 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492624 | orchestrator | 2026-02-28 00:58:46.492628 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-28 00:58:46.492633 | orchestrator | Saturday 28 February 2026 00:57:39 +0000 (0:00:00.376) 0:11:15.661 ***** 2026-02-28 00:58:46.492638 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492642 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492647 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492651 | orchestrator | 2026-02-28 
00:58:46.492656 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-28 00:58:46.492661 | orchestrator | Saturday 28 February 2026 00:57:39 +0000 (0:00:00.391) 0:11:16.052 ***** 2026-02-28 00:58:46.492667 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492674 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.492681 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.492689 | orchestrator | 2026-02-28 00:58:46.492696 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-28 00:58:46.492703 | orchestrator | Saturday 28 February 2026 00:57:40 +0000 (0:00:00.638) 0:11:16.691 ***** 2026-02-28 00:58:46.492711 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492719 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.492727 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.492732 | orchestrator | 2026-02-28 00:58:46.492737 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-28 00:58:46.492741 | orchestrator | Saturday 28 February 2026 00:57:40 +0000 (0:00:00.391) 0:11:17.082 ***** 2026-02-28 00:58:46.492746 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492751 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.492755 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.492760 | orchestrator | 2026-02-28 00:58:46.492764 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-28 00:58:46.492769 | orchestrator | Saturday 28 February 2026 00:57:40 +0000 (0:00:00.343) 0:11:17.425 ***** 2026-02-28 00:58:46.492774 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492778 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492783 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492787 | orchestrator | 2026-02-28 00:58:46.492796 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-28 00:58:46.492801 | orchestrator | Saturday 28 February 2026 00:57:41 +0000 (0:00:00.364) 0:11:17.790 ***** 2026-02-28 00:58:46.492806 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.492813 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.492818 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.492823 | orchestrator | 2026-02-28 00:58:46.492827 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-28 00:58:46.492832 | orchestrator | Saturday 28 February 2026 00:57:42 +0000 (0:00:00.860) 0:11:18.650 ***** 2026-02-28 00:58:46.492836 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.492841 | orchestrator | 2026-02-28 00:58:46.492846 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-28 00:58:46.492854 | orchestrator | Saturday 28 February 2026 00:57:42 +0000 (0:00:00.603) 0:11:19.254 ***** 2026-02-28 00:58:46.492859 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.492864 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-28 00:58:46.492868 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-28 00:58:46.492873 | orchestrator | 2026-02-28 00:58:46.492877 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-28 00:58:46.492882 | orchestrator | Saturday 28 February 2026 00:57:45 +0000 (0:00:02.312) 0:11:21.566 ***** 2026-02-28 00:58:46.492887 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-28 00:58:46.492891 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-28 00:58:46.492896 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.492901 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-28 00:58:46.492905 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-28 00:58:46.492910 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.492915 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-28 00:58:46.492919 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-28 00:58:46.492924 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.492929 | orchestrator | 2026-02-28 00:58:46.492933 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-28 00:58:46.492938 | orchestrator | Saturday 28 February 2026 00:57:46 +0000 (0:00:01.617) 0:11:23.184 ***** 2026-02-28 00:58:46.492943 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.492947 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.492952 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.492956 | orchestrator | 2026-02-28 00:58:46.492961 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-28 00:58:46.492966 | orchestrator | Saturday 28 February 2026 00:57:47 +0000 (0:00:00.366) 0:11:23.551 ***** 2026-02-28 00:58:46.492970 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.492975 | orchestrator | 2026-02-28 00:58:46.492980 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-28 00:58:46.492984 | orchestrator | Saturday 28 February 2026 00:57:47 +0000 (0:00:00.650) 0:11:24.201 ***** 2026-02-28 00:58:46.492989 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.492994 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.492999 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.493003 | orchestrator | 2026-02-28 00:58:46.493008 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-28 00:58:46.493016 | orchestrator | Saturday 28 February 2026 00:57:49 +0000 (0:00:01.586) 0:11:25.787 ***** 2026-02-28 00:58:46.493020 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.493026 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-28 00:58:46.493034 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.493042 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-28 00:58:46.493049 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.493057 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-28 00:58:46.493065 | orchestrator | 2026-02-28 00:58:46.493072 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-28 00:58:46.493079 | orchestrator | Saturday 28 February 2026 00:57:54 +0000 (0:00:05.212) 0:11:31.000 ***** 2026-02-28 00:58:46.493086 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.493093 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-28 00:58:46.493100 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.493107 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-28 00:58:46.493114 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 00:58:46.493122 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-28 00:58:46.493129 | orchestrator | 2026-02-28 00:58:46.493137 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-28 00:58:46.493148 | orchestrator | Saturday 28 February 2026 00:57:56 +0000 (0:00:02.427) 0:11:33.428 ***** 2026-02-28 00:58:46.493156 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-28 00:58:46.493163 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.493171 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-28 00:58:46.493179 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.493186 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-28 00:58:46.493194 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.493201 | orchestrator | 2026-02-28 00:58:46.493209 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-28 00:58:46.493222 | orchestrator | Saturday 28 February 2026 00:57:58 +0000 (0:00:01.605) 0:11:35.033 ***** 2026-02-28 00:58:46.493230 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-28 00:58:46.493238 | orchestrator | 2026-02-28 00:58:46.493245 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-28 00:58:46.493253 | orchestrator | Saturday 28 February 2026 00:57:58 +0000 (0:00:00.259) 0:11:35.292 ***** 2026-02-28 00:58:46.493260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-28 00:58:46.493267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-28 00:58:46.493275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-28 00:58:46.493283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-28 00:58:46.493290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-28 00:58:46.493298 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.493306 | orchestrator | 2026-02-28 00:58:46.493320 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-28 00:58:46.493327 | orchestrator | Saturday 28 February 2026 00:58:00 +0000 (0:00:01.592) 0:11:36.885 ***** 2026-02-28 00:58:46.493335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-28 00:58:46.493343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-28 00:58:46.493351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-28 00:58:46.493359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-28 00:58:46.493367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-28 00:58:46.493375 | orchestrator | skipping: [testbed-node-3] 2026-02-28 
00:58:46.493383 | orchestrator | 2026-02-28 00:58:46.493390 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-28 00:58:46.493398 | orchestrator | Saturday 28 February 2026 00:58:01 +0000 (0:00:01.158) 0:11:38.043 ***** 2026-02-28 00:58:46.493406 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-28 00:58:46.493414 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-28 00:58:46.493421 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-28 00:58:46.493429 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-28 00:58:46.493437 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-28 00:58:46.493445 | orchestrator | 2026-02-28 00:58:46.493452 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-28 00:58:46.493460 | orchestrator | Saturday 28 February 2026 00:58:30 +0000 (0:00:29.389) 0:12:07.433 ***** 2026-02-28 00:58:46.493468 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.493475 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.493483 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.493491 | orchestrator | 2026-02-28 00:58:46.493499 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-28 00:58:46.493506 | orchestrator | 
Saturday 28 February 2026 00:58:31 +0000 (0:00:00.365) 0:12:07.799 ***** 2026-02-28 00:58:46.493514 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.493522 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.493530 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.493537 | orchestrator | 2026-02-28 00:58:46.493545 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-28 00:58:46.493553 | orchestrator | Saturday 28 February 2026 00:58:31 +0000 (0:00:00.348) 0:12:08.147 ***** 2026-02-28 00:58:46.493566 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.493574 | orchestrator | 2026-02-28 00:58:46.493581 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-28 00:58:46.493588 | orchestrator | Saturday 28 February 2026 00:58:32 +0000 (0:00:00.910) 0:12:09.058 ***** 2026-02-28 00:58:46.493596 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.493602 | orchestrator | 2026-02-28 00:58:46.493658 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-28 00:58:46.493668 | orchestrator | Saturday 28 February 2026 00:58:33 +0000 (0:00:00.630) 0:12:09.689 ***** 2026-02-28 00:58:46.493675 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.493686 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.493694 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.493702 | orchestrator | 2026-02-28 00:58:46.493709 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-28 00:58:46.493717 | orchestrator | Saturday 28 February 2026 00:58:34 +0000 (0:00:01.327) 0:12:11.016 ***** 2026-02-28 00:58:46.493725 | orchestrator | changed: 
[testbed-node-3] 2026-02-28 00:58:46.493732 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.493740 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.493747 | orchestrator | 2026-02-28 00:58:46.493756 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-28 00:58:46.493761 | orchestrator | Saturday 28 February 2026 00:58:36 +0000 (0:00:01.637) 0:12:12.654 ***** 2026-02-28 00:58:46.493766 | orchestrator | changed: [testbed-node-3] 2026-02-28 00:58:46.493770 | orchestrator | changed: [testbed-node-4] 2026-02-28 00:58:46.493775 | orchestrator | changed: [testbed-node-5] 2026-02-28 00:58:46.493779 | orchestrator | 2026-02-28 00:58:46.493784 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-28 00:58:46.493789 | orchestrator | Saturday 28 February 2026 00:58:38 +0000 (0:00:01.996) 0:12:14.651 ***** 2026-02-28 00:58:46.493793 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.493798 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.493803 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-28 00:58:46.493807 | orchestrator | 2026-02-28 00:58:46.493812 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-28 00:58:46.493816 | orchestrator | Saturday 28 February 2026 00:58:41 +0000 (0:00:02.887) 0:12:17.538 ***** 2026-02-28 00:58:46.493821 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.493826 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.493830 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.493835 | orchestrator 
| 2026-02-28 00:58:46.493839 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-28 00:58:46.493844 | orchestrator | Saturday 28 February 2026 00:58:41 +0000 (0:00:00.404) 0:12:17.942 ***** 2026-02-28 00:58:46.493849 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 00:58:46.493853 | orchestrator | 2026-02-28 00:58:46.493858 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-28 00:58:46.493863 | orchestrator | Saturday 28 February 2026 00:58:42 +0000 (0:00:00.685) 0:12:18.628 ***** 2026-02-28 00:58:46.493867 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.493872 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.493876 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.493881 | orchestrator | 2026-02-28 00:58:46.493886 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-28 00:58:46.493890 | orchestrator | Saturday 28 February 2026 00:58:42 +0000 (0:00:00.752) 0:12:19.381 ***** 2026-02-28 00:58:46.493895 | orchestrator | skipping: [testbed-node-3] 2026-02-28 00:58:46.493899 | orchestrator | skipping: [testbed-node-4] 2026-02-28 00:58:46.493904 | orchestrator | skipping: [testbed-node-5] 2026-02-28 00:58:46.493908 | orchestrator | 2026-02-28 00:58:46.493913 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-28 00:58:46.493918 | orchestrator | Saturday 28 February 2026 00:58:43 +0000 (0:00:00.387) 0:12:19.768 ***** 2026-02-28 00:58:46.493922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 00:58:46.493931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 00:58:46.493936 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 00:58:46.493940 | orchestrator 
| skipping: [testbed-node-3] 2026-02-28 00:58:46.493945 | orchestrator | 2026-02-28 00:58:46.493950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-28 00:58:46.493954 | orchestrator | Saturday 28 February 2026 00:58:44 +0000 (0:00:00.750) 0:12:20.519 ***** 2026-02-28 00:58:46.493959 | orchestrator | ok: [testbed-node-3] 2026-02-28 00:58:46.493964 | orchestrator | ok: [testbed-node-4] 2026-02-28 00:58:46.493968 | orchestrator | ok: [testbed-node-5] 2026-02-28 00:58:46.493973 | orchestrator | 2026-02-28 00:58:46.493977 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:58:46.493982 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-28 00:58:46.493987 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-28 00:58:46.493995 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-28 00:58:46.494000 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-28 00:58:46.494005 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-28 00:58:46.494013 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-28 00:58:46.494038 | orchestrator | 2026-02-28 00:58:46.494043 | orchestrator | 2026-02-28 00:58:46.494048 | orchestrator | 2026-02-28 00:58:46.494052 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:58:46.494057 | orchestrator | Saturday 28 February 2026 00:58:44 +0000 (0:00:00.282) 0:12:20.802 ***** 2026-02-28 00:58:46.494062 | orchestrator | =============================================================================== 
2026-02-28 00:58:46.494066 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 49.49s 2026-02-28 00:58:46.494071 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 45.28s 2026-02-28 00:58:46.494075 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.09s 2026-02-28 00:58:46.494079 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.39s 2026-02-28 00:58:46.494083 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.08s 2026-02-28 00:58:46.494087 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.96s 2026-02-28 00:58:46.494091 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.82s 2026-02-28 00:58:46.494095 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.61s 2026-02-28 00:58:46.494100 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.93s 2026-02-28 00:58:46.494104 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.57s 2026-02-28 00:58:46.494108 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.40s 2026-02-28 00:58:46.494112 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.22s 2026-02-28 00:58:46.494116 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 5.98s 2026-02-28 00:58:46.494120 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.21s 2026-02-28 00:58:46.494124 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.00s 2026-02-28 00:58:46.494129 | orchestrator | ceph-facts : Set_fact current_fsid rc 1 --------------------------------- 4.69s 2026-02-28 
00:58:46.494137 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.68s 2026-02-28 00:58:46.494142 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.14s 2026-02-28 00:58:46.494146 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.12s 2026-02-28 00:58:46.494150 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.80s 2026-02-28 00:58:46.494154 | orchestrator | 2026-02-28 00:58:46 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:58:46.494158 | orchestrator | 2026-02-28 00:58:46 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state STARTED 2026-02-28 00:58:46.494163 | orchestrator | 2026-02-28 00:58:46 | INFO  | Task 05b22c01-1353-490d-b31c-c12dfeb51265 is in state STARTED 2026-02-28 00:58:46.494167 | orchestrator | 2026-02-28 00:58:46 | INFO  | Wait 1 second(s) until the next check
2026-02-28 00:59:23.070733 | orchestrator | 2026-02-28 00:59:23 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:23.071291 | orchestrator | 2026-02-28 00:59:23 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state STARTED 2026-02-28 00:59:23.074442 | orchestrator | 2026-02-28 00:59:23 | INFO  | Task 05b22c01-1353-490d-b31c-c12dfeb51265 is in state SUCCESS 2026-02-28 00:59:23.074509 | orchestrator | 2026-02-28 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:23.075726 | orchestrator | 2026-02-28 00:59:23.075909 | orchestrator | 2026-02-28 00:59:23.075923 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:59:23.075931 | orchestrator | 2026-02-28 00:59:23.075939 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:59:23.075967 | orchestrator | Saturday 28 February 2026 00:56:27 +0000 (0:00:00.334) 0:00:00.334 ***** 2026-02-28 00:59:23.075975 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:23.075983 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:23.075990 | orchestrator | ok: [testbed-node-2] 2026-02-28
00:59:23.076004 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:59:23.076011 | orchestrator | Saturday 28 February 2026 00:56:27 +0000 (0:00:00.396) 0:00:00.731 ***** 2026-02-28 00:59:23.076019 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-28 00:59:23.076026 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-28 00:59:23.076033 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-28 00:59:23.076040 | orchestrator | 2026-02-28 00:59:23.076046 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-28 00:59:23.076053 | orchestrator | 2026-02-28 00:59:23.076061 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 00:59:23.076068 | orchestrator | Saturday 28 February 2026 00:56:28 +0000 (0:00:00.530) 0:00:01.261 ***** 2026-02-28 00:59:23.076075 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:23.076082 | orchestrator | 2026-02-28 00:59:23.076089 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-28 00:59:23.076096 | orchestrator | Saturday 28 February 2026 00:56:28 +0000 (0:00:00.513) 0:00:01.775 ***** 2026-02-28 00:59:23.076120 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-28 00:59:23.076128 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-28 00:59:23.076135 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-28 00:59:23.076142 | orchestrator | 2026-02-28 00:59:23.076149 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-28 00:59:23.076156 | 
orchestrator | Saturday 28 February 2026 00:56:29 +0000 (0:00:00.817) 0:00:02.593 ***** 2026-02-28 00:59:23.076165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.076177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.076208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.076227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.076238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.076247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.076256 | orchestrator | 
2026-02-28 00:59:23.076264 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 00:59:23.076278 | orchestrator | Saturday 28 February 2026 00:56:31 +0000 (0:00:01.915) 0:00:04.508 ***** 2026-02-28 00:59:23.076286 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:23.076293 | orchestrator | 2026-02-28 00:59:23.076300 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-28 00:59:23.076308 | orchestrator | Saturday 28 February 2026 00:56:32 +0000 (0:00:00.557) 0:00:05.066 ***** 2026-02-28 00:59:23.076328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.076338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.076346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.076355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.076379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.076388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.076397 | orchestrator | 2026-02-28 00:59:23.076404 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-28 00:59:23.076413 | orchestrator | Saturday 28 February 2026 00:56:35 +0000 (0:00:02.878) 0:00:07.944 ***** 2026-02-28 00:59:23.076421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:23.076429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:23.076442 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:23.076454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:23.076469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:23.076480 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:23.076491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:23.076501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:23.076529 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:23.076538 | orchestrator | 2026-02-28 00:59:23.076547 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-28 00:59:23.076556 | orchestrator | Saturday 28 February 2026 00:56:36 +0000 (0:00:01.787) 0:00:09.731 ***** 2026-02-28 00:59:23.076565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:23.076582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:23.076592 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:23.076664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:23.076679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:23.076695 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:23.076705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-28 00:59:23.076727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-28 00:59:23.076737 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:23.076745 | orchestrator | 2026-02-28 00:59:23.076754 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-28 00:59:23.076762 | orchestrator | Saturday 28 February 2026 00:56:37 +0000 (0:00:00.979) 0:00:10.711 ***** 2026-02-28 00:59:23.076771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.076780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.076796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.076820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.076831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.076841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.076859 | orchestrator | 2026-02-28 00:59:23.076868 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-28 00:59:23.076876 | orchestrator | Saturday 28 February 2026 00:56:40 +0000 (0:00:02.787) 0:00:13.498 ***** 2026-02-28 00:59:23.076884 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:23.076891 | orchestrator | changed: [testbed-node-1] 
2026-02-28 00:59:23.076899 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:23.076906 | orchestrator | 2026-02-28 00:59:23.076913 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-28 00:59:23.076921 | orchestrator | Saturday 28 February 2026 00:56:44 +0000 (0:00:03.497) 0:00:16.995 ***** 2026-02-28 00:59:23.076929 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:23.076937 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:23.076945 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:23.076953 | orchestrator | 2026-02-28 00:59:23.076960 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-28 00:59:23.076968 | orchestrator | Saturday 28 February 2026 00:56:46 +0000 (0:00:02.345) 0:00:19.341 ***** 2026-02-28 00:59:23.076976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.076992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.077001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-28 00:59:23.077015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.077024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.077042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-28 00:59:23.077052 | orchestrator | 2026-02-28 00:59:23.077060 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 00:59:23.077068 | orchestrator | Saturday 28 February 2026 00:56:48 +0000 (0:00:02.414) 0:00:21.756 ***** 2026-02-28 00:59:23.077075 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:23.077083 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:23.077091 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:23.077098 | orchestrator | 2026-02-28 00:59:23.077106 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-28 00:59:23.077114 | orchestrator | Saturday 28 February 2026 00:56:49 +0000 (0:00:00.511) 0:00:22.268 ***** 2026-02-28 00:59:23.077126 | orchestrator | 2026-02-28 00:59:23.077134 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-28 00:59:23.077142 | orchestrator | Saturday 28 February 2026 00:56:49 +0000 (0:00:00.142) 0:00:22.410 ***** 2026-02-28 00:59:23.077149 | orchestrator | 2026-02-28 00:59:23.077157 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-28 00:59:23.077165 | 
orchestrator | Saturday 28 February 2026 00:56:49 +0000 (0:00:00.151) 0:00:22.562 ***** 2026-02-28 00:59:23.077172 | orchestrator | 2026-02-28 00:59:23.077180 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-28 00:59:23.077188 | orchestrator | Saturday 28 February 2026 00:56:49 +0000 (0:00:00.136) 0:00:22.699 ***** 2026-02-28 00:59:23.077195 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:23.077202 | orchestrator | 2026-02-28 00:59:23.077210 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-28 00:59:23.077217 | orchestrator | Saturday 28 February 2026 00:56:50 +0000 (0:00:00.804) 0:00:23.504 ***** 2026-02-28 00:59:23.077225 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:23.077232 | orchestrator | 2026-02-28 00:59:23.077240 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-28 00:59:23.077247 | orchestrator | Saturday 28 February 2026 00:56:50 +0000 (0:00:00.220) 0:00:23.724 ***** 2026-02-28 00:59:23.077256 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:23.077263 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:23.077271 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:23.077279 | orchestrator | 2026-02-28 00:59:23.077286 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-28 00:59:23.077294 | orchestrator | Saturday 28 February 2026 00:57:55 +0000 (0:01:04.622) 0:01:28.347 ***** 2026-02-28 00:59:23.077301 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:23.077308 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:23.077316 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:23.077323 | orchestrator | 2026-02-28 00:59:23.077331 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-28 00:59:23.077338 | 
orchestrator | Saturday 28 February 2026 00:59:06 +0000 (0:01:11.376) 0:02:39.723 ***** 2026-02-28 00:59:23.077346 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:23.077353 | orchestrator | 2026-02-28 00:59:23.077361 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-28 00:59:23.077368 | orchestrator | Saturday 28 February 2026 00:59:07 +0000 (0:00:00.748) 0:02:40.471 ***** 2026-02-28 00:59:23.077375 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:23.077383 | orchestrator | 2026-02-28 00:59:23.077390 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-02-28 00:59:23.077397 | orchestrator | Saturday 28 February 2026 00:59:10 +0000 (0:00:02.565) 0:02:43.037 ***** 2026-02-28 00:59:23.077405 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:23.077412 | orchestrator | 2026-02-28 00:59:23.077420 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-28 00:59:23.077428 | orchestrator | Saturday 28 February 2026 00:59:12 +0000 (0:00:02.322) 0:02:45.359 ***** 2026-02-28 00:59:23.077435 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:23.077443 | orchestrator | 2026-02-28 00:59:23.077450 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-28 00:59:23.077458 | orchestrator | Saturday 28 February 2026 00:59:15 +0000 (0:00:02.612) 0:02:47.972 ***** 2026-02-28 00:59:23.077466 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:23.077473 | orchestrator | 2026-02-28 00:59:23.077481 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-28 00:59:23.077488 | orchestrator | Saturday 28 February 2026 00:59:18 +0000 (0:00:03.077) 0:02:51.050 ***** 2026-02-28 00:59:23.077497 | orchestrator | changed: 
[testbed-node-0] 2026-02-28 00:59:23.077504 | orchestrator | 2026-02-28 00:59:23.077512 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:59:23.077525 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 00:59:23.077538 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:59:23.077551 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-28 00:59:23.077559 | orchestrator | 2026-02-28 00:59:23.077567 | orchestrator | 2026-02-28 00:59:23.077575 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 00:59:23.077582 | orchestrator | Saturday 28 February 2026 00:59:20 +0000 (0:00:02.730) 0:02:53.780 ***** 2026-02-28 00:59:23.077589 | orchestrator | =============================================================================== 2026-02-28 00:59:23.077597 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 71.38s 2026-02-28 00:59:23.077604 | orchestrator | opensearch : Restart opensearch container ------------------------------ 64.62s 2026-02-28 00:59:23.077611 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.50s 2026-02-28 00:59:23.077677 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.08s 2026-02-28 00:59:23.077687 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.88s 2026-02-28 00:59:23.077695 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.79s 2026-02-28 00:59:23.077702 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.73s 2026-02-28 00:59:23.077710 | orchestrator | opensearch : Check if a log retention policy 
exists --------------------- 2.61s 2026-02-28 00:59:23.077718 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.57s 2026-02-28 00:59:23.077725 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.41s 2026-02-28 00:59:23.077733 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.35s 2026-02-28 00:59:23.077741 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.32s 2026-02-28 00:59:23.077749 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.92s 2026-02-28 00:59:23.077757 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.79s 2026-02-28 00:59:23.077765 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.98s 2026-02-28 00:59:23.077773 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.82s 2026-02-28 00:59:23.077780 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.80s 2026-02-28 00:59:23.077788 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.75s 2026-02-28 00:59:23.077796 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-02-28 00:59:23.077804 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-02-28 00:59:26.116161 | orchestrator | 2026-02-28 00:59:26 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:26.117428 | orchestrator | 2026-02-28 00:59:26 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state STARTED 2026-02-28 00:59:26.117456 | orchestrator | 2026-02-28 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:29.158197 | orchestrator | 2026-02-28 00:59:29 | INFO  | Task 
bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:29.160011 | orchestrator | 2026-02-28 00:59:29 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state STARTED 2026-02-28 00:59:29.160737 | orchestrator | 2026-02-28 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:32.207100 | orchestrator | 2026-02-28 00:59:32 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:32.208845 | orchestrator | 2026-02-28 00:59:32 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state STARTED 2026-02-28 00:59:32.208918 | orchestrator | 2026-02-28 00:59:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:35.252655 | orchestrator | 2026-02-28 00:59:35 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:35.256717 | orchestrator | 2026-02-28 00:59:35 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state STARTED 2026-02-28 00:59:35.256783 | orchestrator | 2026-02-28 00:59:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:38.303504 | orchestrator | 2026-02-28 00:59:38 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:38.304113 | orchestrator | 2026-02-28 00:59:38 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state STARTED 2026-02-28 00:59:38.304152 | orchestrator | 2026-02-28 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:41.355071 | orchestrator | 2026-02-28 00:59:41 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:41.357484 | orchestrator | 2026-02-28 00:59:41 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state STARTED 2026-02-28 00:59:41.357551 | orchestrator | 2026-02-28 00:59:41 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:44.402764 | orchestrator | 2026-02-28 00:59:44 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 
00:59:44.405297 | orchestrator | 2026-02-28 00:59:44 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state STARTED 2026-02-28 00:59:44.405349 | orchestrator | 2026-02-28 00:59:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:47.458463 | orchestrator | 2026-02-28 00:59:47 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:47.460131 | orchestrator | 2026-02-28 00:59:47 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 00:59:47.462114 | orchestrator | 2026-02-28 00:59:47 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 00:59:47.464585 | orchestrator | 2026-02-28 00:59:47 | INFO  | Task 0dc8a114-e78b-4115-9869-23bf18294c77 is in state SUCCESS 2026-02-28 00:59:47.468394 | orchestrator | 2026-02-28 00:59:47.468431 | orchestrator | 2026-02-28 00:59:47.468437 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-02-28 00:59:47.468442 | orchestrator | 2026-02-28 00:59:47.468447 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-28 00:59:47.468453 | orchestrator | Saturday 28 February 2026 00:56:27 +0000 (0:00:00.122) 0:00:00.122 ***** 2026-02-28 00:59:47.468457 | orchestrator | ok: [localhost] => { 2026-02-28 00:59:47.468463 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-02-28 00:59:47.468468 | orchestrator | } 2026-02-28 00:59:47.468473 | orchestrator | 2026-02-28 00:59:47.468478 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-02-28 00:59:47.468482 | orchestrator | Saturday 28 February 2026 00:56:27 +0000 (0:00:00.059) 0:00:00.182 ***** 2026-02-28 00:59:47.468487 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-02-28 00:59:47.468493 | orchestrator | ...ignoring 2026-02-28 00:59:47.468497 | orchestrator | 2026-02-28 00:59:47.468502 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-02-28 00:59:47.468506 | orchestrator | Saturday 28 February 2026 00:56:30 +0000 (0:00:03.106) 0:00:03.289 ***** 2026-02-28 00:59:47.468533 | orchestrator | skipping: [localhost] 2026-02-28 00:59:47.468538 | orchestrator | 2026-02-28 00:59:47.468542 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-02-28 00:59:47.468546 | orchestrator | Saturday 28 February 2026 00:56:30 +0000 (0:00:00.057) 0:00:03.346 ***** 2026-02-28 00:59:47.468551 | orchestrator | ok: [localhost] 2026-02-28 00:59:47.468555 | orchestrator | 2026-02-28 00:59:47.468560 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 00:59:47.468564 | orchestrator | 2026-02-28 00:59:47.468568 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 00:59:47.468573 | orchestrator | Saturday 28 February 2026 00:56:30 +0000 (0:00:00.255) 0:00:03.602 ***** 2026-02-28 00:59:47.468577 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.468581 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:47.468586 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:47.468590 | orchestrator | 2026-02-28 00:59:47.468594 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 00:59:47.468598 | orchestrator | Saturday 28 February 2026 00:56:31 +0000 (0:00:00.405) 0:00:04.008 ***** 2026-02-28 00:59:47.468603 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-28 00:59:47.468608 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-02-28 00:59:47.468612 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-28 00:59:47.468616 | orchestrator | 2026-02-28 00:59:47.468640 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-28 00:59:47.468645 | orchestrator | 2026-02-28 00:59:47.468649 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-28 00:59:47.468654 | orchestrator | Saturday 28 February 2026 00:56:31 +0000 (0:00:00.597) 0:00:04.605 ***** 2026-02-28 00:59:47.468658 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-28 00:59:47.468663 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-28 00:59:47.468667 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-28 00:59:47.468672 | orchestrator | 2026-02-28 00:59:47.468676 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-28 00:59:47.468680 | orchestrator | Saturday 28 February 2026 00:56:32 +0000 (0:00:00.525) 0:00:05.131 ***** 2026-02-28 00:59:47.468685 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:47.468690 | orchestrator | 2026-02-28 00:59:47.468695 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-28 00:59:47.468699 | orchestrator | Saturday 28 February 2026 00:56:33 +0000 (0:00:00.781) 0:00:05.912 ***** 2026-02-28 00:59:47.468727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:47.468740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:47.468749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:47.468758 | orchestrator | 2026-02-28 00:59:47.468765 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-28 00:59:47.468769 | orchestrator | Saturday 28 February 2026 00:56:37 +0000 (0:00:03.923) 0:00:09.836 ***** 2026-02-28 00:59:47.468774 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.468779 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.468783 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.468788 | orchestrator | 2026-02-28 00:59:47.468792 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-28 00:59:47.468796 | orchestrator | Saturday 28 February 2026 00:56:37 +0000 (0:00:00.822) 0:00:10.659 ***** 2026-02-28 00:59:47.468801 | orchestrator | skipping: [testbed-node-1] 2026-02-28 
00:59:47.468805 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.468834 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.468838 | orchestrator | 2026-02-28 00:59:47.468843 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-28 00:59:47.468847 | orchestrator | Saturday 28 February 2026 00:56:39 +0000 (0:00:01.815) 0:00:12.474 ***** 2026-02-28 00:59:47.468852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:47.468864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:47.468873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 
00:59:47.468878 | orchestrator | 2026-02-28 00:59:47.468882 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-28 00:59:47.468887 | orchestrator | Saturday 28 February 2026 00:56:44 +0000 (0:00:04.814) 0:00:17.289 ***** 2026-02-28 00:59:47.468891 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.468895 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.468900 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.468904 | orchestrator | 2026-02-28 00:59:47.468908 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-28 00:59:47.468913 | orchestrator | Saturday 28 February 2026 00:56:45 +0000 (0:00:01.295) 0:00:18.584 ***** 2026-02-28 00:59:47.468917 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.468961 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:47.468967 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:47.468971 | orchestrator | 2026-02-28 00:59:47.468975 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-28 00:59:47.468980 | orchestrator | Saturday 28 February 2026 00:56:51 +0000 (0:00:05.867) 0:00:24.451 ***** 2026-02-28 00:59:47.468985 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:47.468989 | orchestrator | 2026-02-28 00:59:47.468993 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-28 00:59:47.469002 | orchestrator | Saturday 28 February 2026 00:56:52 +0000 (0:00:01.032) 0:00:25.484 ***** 2026-02-28 00:59:47.469015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:47.469022 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.469027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:47.469033 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.469045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:47.469055 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.469060 | orchestrator | 2026-02-28 00:59:47.469150 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-28 00:59:47.469158 | orchestrator | Saturday 28 February 2026 00:56:56 +0000 (0:00:03.768) 0:00:29.253 ***** 2026-02-28 00:59:47.469163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:47.469169 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.469180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:47.469191 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.469197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:47.469202 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.469207 | orchestrator | 2026-02-28 00:59:47.469212 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-28 00:59:47.469217 | orchestrator | Saturday 28 February 2026 00:57:00 +0000 (0:00:03.745) 0:00:32.998 ***** 2026-02-28 00:59:47.469225 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:47.469235 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.469244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:47.469250 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.469260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-28 00:59:47.469270 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.469275 | orchestrator | 2026-02-28 00:59:47.469280 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-28 00:59:47.469285 | orchestrator | Saturday 28 February 2026 00:57:03 +0000 
(0:00:03.530) 0:00:36.529 ***** 2026-02-28 00:59:47.469294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
2026-02-28 00:59:47.469310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:47.469325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-28 00:59:47.469331 | orchestrator | 2026-02-28 00:59:47.469336 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-28 00:59:47.469341 | orchestrator | Saturday 28 February 2026 00:57:07 +0000 (0:00:04.154) 0:00:40.683 ***** 2026-02-28 
00:59:47.469346 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.469351 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:47.469356 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:47.469360 | orchestrator | 2026-02-28 00:59:47.469365 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-28 00:59:47.469370 | orchestrator | Saturday 28 February 2026 00:57:09 +0000 (0:00:01.090) 0:00:41.774 ***** 2026-02-28 00:59:47.469375 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.469379 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:47.469387 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:47.469392 | orchestrator | 2026-02-28 00:59:47.469396 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-28 00:59:47.469401 | orchestrator | Saturday 28 February 2026 00:57:09 +0000 (0:00:00.427) 0:00:42.201 ***** 2026-02-28 00:59:47.469405 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.469409 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:47.469414 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:47.469418 | orchestrator | 2026-02-28 00:59:47.469422 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-28 00:59:47.469427 | orchestrator | Saturday 28 February 2026 00:57:09 +0000 (0:00:00.468) 0:00:42.670 ***** 2026-02-28 00:59:47.469432 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-28 00:59:47.469436 | orchestrator | ...ignoring 2026-02-28 00:59:47.469441 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-28 00:59:47.469446 | orchestrator | ...ignoring 2026-02-28 00:59:47.469450 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-28 00:59:47.469454 | orchestrator | ...ignoring 2026-02-28 00:59:47.469459 | orchestrator | 2026-02-28 00:59:47.469463 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-28 00:59:47.469467 | orchestrator | Saturday 28 February 2026 00:57:20 +0000 (0:00:10.850) 0:00:53.520 ***** 2026-02-28 00:59:47.469472 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.469476 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:47.469481 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:47.469485 | orchestrator | 2026-02-28 00:59:47.469489 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-28 00:59:47.469494 | orchestrator | Saturday 28 February 2026 00:57:21 +0000 (0:00:00.412) 0:00:53.932 ***** 2026-02-28 00:59:47.469498 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.469502 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.469507 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.469511 | orchestrator | 2026-02-28 00:59:47.469516 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-28 00:59:47.469526 | orchestrator | Saturday 28 February 2026 00:57:21 +0000 (0:00:00.584) 0:00:54.516 ***** 2026-02-28 00:59:47.469533 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.469540 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.469547 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.469553 | orchestrator | 2026-02-28 00:59:47.469559 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-28 00:59:47.469566 | orchestrator | Saturday 28 February 2026 00:57:22 +0000 (0:00:00.398) 0:00:54.914 ***** 2026-02-28 00:59:47.469573 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.469580 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.469586 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.469593 | orchestrator | 2026-02-28 00:59:47.469600 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-28 00:59:47.469607 | orchestrator | Saturday 28 February 2026 00:57:22 +0000 (0:00:00.428) 0:00:55.343 ***** 2026-02-28 00:59:47.469614 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.469636 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:47.469643 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:47.469651 | orchestrator | 2026-02-28 00:59:47.469658 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-28 00:59:47.469670 | orchestrator | Saturday 28 February 2026 00:57:23 +0000 (0:00:00.451) 0:00:55.794 ***** 2026-02-28 00:59:47.469677 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.469684 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.469690 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.469700 | orchestrator | 2026-02-28 00:59:47.469705 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-28 00:59:47.469709 | orchestrator | Saturday 28 February 2026 00:57:23 +0000 (0:00:00.773) 0:00:56.568 ***** 2026-02-28 00:59:47.469713 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.469718 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.469722 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-28 00:59:47.469727 | orchestrator | 2026-02-28 
00:59:47.469731 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-28 00:59:47.469735 | orchestrator | Saturday 28 February 2026 00:57:24 +0000 (0:00:00.422) 0:00:56.990 ***** 2026-02-28 00:59:47.469740 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.469744 | orchestrator | 2026-02-28 00:59:47.469748 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-28 00:59:47.469753 | orchestrator | Saturday 28 February 2026 00:57:35 +0000 (0:00:10.823) 0:01:07.814 ***** 2026-02-28 00:59:47.469757 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.469761 | orchestrator | 2026-02-28 00:59:47.469766 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-28 00:59:47.469770 | orchestrator | Saturday 28 February 2026 00:57:35 +0000 (0:00:00.135) 0:01:07.949 ***** 2026-02-28 00:59:47.469774 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.469778 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.469783 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.469787 | orchestrator | 2026-02-28 00:59:47.469791 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-28 00:59:47.469795 | orchestrator | Saturday 28 February 2026 00:57:36 +0000 (0:00:01.217) 0:01:09.167 ***** 2026-02-28 00:59:47.469800 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.469804 | orchestrator | 2026-02-28 00:59:47.469808 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-28 00:59:47.469812 | orchestrator | Saturday 28 February 2026 00:57:45 +0000 (0:00:08.852) 0:01:18.019 ***** 2026-02-28 00:59:47.469817 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.469821 | orchestrator | 2026-02-28 00:59:47.469825 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-02-28 00:59:47.469830 | orchestrator | Saturday 28 February 2026 00:57:46 +0000 (0:00:01.615) 0:01:19.635 ***** 2026-02-28 00:59:47.469834 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.469838 | orchestrator | 2026-02-28 00:59:47.469843 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-28 00:59:47.469847 | orchestrator | Saturday 28 February 2026 00:57:49 +0000 (0:00:02.938) 0:01:22.573 ***** 2026-02-28 00:59:47.469851 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.469856 | orchestrator | 2026-02-28 00:59:47.469860 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-28 00:59:47.469864 | orchestrator | Saturday 28 February 2026 00:57:49 +0000 (0:00:00.138) 0:01:22.712 ***** 2026-02-28 00:59:47.469869 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.469873 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.469877 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.469882 | orchestrator | 2026-02-28 00:59:47.469886 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-28 00:59:47.469890 | orchestrator | Saturday 28 February 2026 00:57:50 +0000 (0:00:00.420) 0:01:23.133 ***** 2026-02-28 00:59:47.469895 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.469899 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:47.469903 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:47.469908 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-28 00:59:47.469912 | orchestrator | 2026-02-28 00:59:47.469916 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-28 00:59:47.469921 | orchestrator | skipping: no hosts matched 2026-02-28 00:59:47.469929 | orchestrator | 2026-02-28 00:59:47.469933 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-28 00:59:47.469937 | orchestrator | 2026-02-28 00:59:47.469942 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-28 00:59:47.469946 | orchestrator | Saturday 28 February 2026 00:57:51 +0000 (0:00:00.673) 0:01:23.807 ***** 2026-02-28 00:59:47.469950 | orchestrator | changed: [testbed-node-1] 2026-02-28 00:59:47.469954 | orchestrator | 2026-02-28 00:59:47.469959 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-28 00:59:47.469963 | orchestrator | Saturday 28 February 2026 00:58:10 +0000 (0:00:19.563) 0:01:43.370 ***** 2026-02-28 00:59:47.469967 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:47.469972 | orchestrator | 2026-02-28 00:59:47.469976 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-28 00:59:47.469984 | orchestrator | Saturday 28 February 2026 00:58:26 +0000 (0:00:15.690) 0:01:59.060 ***** 2026-02-28 00:59:47.469997 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:47.470001 | orchestrator | 2026-02-28 00:59:47.470006 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-28 00:59:47.470010 | orchestrator | 2026-02-28 00:59:47.470042 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-28 00:59:47.470047 | orchestrator | Saturday 28 February 2026 00:58:29 +0000 (0:00:02.940) 0:02:02.001 ***** 2026-02-28 00:59:47.470051 | orchestrator | changed: [testbed-node-2] 2026-02-28 00:59:47.470055 | orchestrator | 2026-02-28 00:59:47.470060 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-28 00:59:47.470064 | orchestrator | Saturday 28 February 2026 00:58:49 +0000 (0:00:20.004) 0:02:22.006 ***** 2026-02-28 00:59:47.470068 | 
orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:47.470073 | orchestrator | 2026-02-28 00:59:47.470077 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-28 00:59:47.470081 | orchestrator | Saturday 28 February 2026 00:59:04 +0000 (0:00:15.595) 0:02:37.602 ***** 2026-02-28 00:59:47.470086 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:47.470090 | orchestrator | 2026-02-28 00:59:47.470097 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-28 00:59:47.470102 | orchestrator | 2026-02-28 00:59:47.470106 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-28 00:59:47.470110 | orchestrator | Saturday 28 February 2026 00:59:08 +0000 (0:00:03.915) 0:02:41.518 ***** 2026-02-28 00:59:47.470114 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.470119 | orchestrator | 2026-02-28 00:59:47.470123 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-28 00:59:47.470127 | orchestrator | Saturday 28 February 2026 00:59:27 +0000 (0:00:19.222) 0:03:00.740 ***** 2026-02-28 00:59:47.470132 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.470136 | orchestrator | 2026-02-28 00:59:47.470140 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-28 00:59:47.470145 | orchestrator | Saturday 28 February 2026 00:59:29 +0000 (0:00:01.063) 0:03:01.804 ***** 2026-02-28 00:59:47.470149 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.470153 | orchestrator | 2026-02-28 00:59:47.470158 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-28 00:59:47.470162 | orchestrator | 2026-02-28 00:59:47.470166 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-28 00:59:47.470171 | orchestrator | 
Saturday 28 February 2026 00:59:31 +0000 (0:00:02.574) 0:03:04.379 ***** 2026-02-28 00:59:47.470175 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 00:59:47.470179 | orchestrator | 2026-02-28 00:59:47.470183 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-28 00:59:47.470188 | orchestrator | Saturday 28 February 2026 00:59:32 +0000 (0:00:00.604) 0:03:04.984 ***** 2026-02-28 00:59:47.470192 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.470196 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.470205 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.470209 | orchestrator | 2026-02-28 00:59:47.470213 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-28 00:59:47.470218 | orchestrator | Saturday 28 February 2026 00:59:34 +0000 (0:00:02.467) 0:03:07.452 ***** 2026-02-28 00:59:47.470222 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.470226 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.470231 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.470235 | orchestrator | 2026-02-28 00:59:47.470239 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-28 00:59:47.470243 | orchestrator | Saturday 28 February 2026 00:59:36 +0000 (0:00:02.228) 0:03:09.680 ***** 2026-02-28 00:59:47.470248 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.470252 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.470256 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.470261 | orchestrator | 2026-02-28 00:59:47.470265 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-28 00:59:47.470269 | orchestrator | Saturday 28 February 2026 00:59:39 +0000 (0:00:02.356) 0:03:12.037 ***** 2026-02-28 00:59:47.470274 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.470278 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.470283 | orchestrator | changed: [testbed-node-0] 2026-02-28 00:59:47.470287 | orchestrator | 2026-02-28 00:59:47.470291 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-28 00:59:47.470295 | orchestrator | Saturday 28 February 2026 00:59:41 +0000 (0:00:02.325) 0:03:14.362 ***** 2026-02-28 00:59:47.470300 | orchestrator | ok: [testbed-node-1] 2026-02-28 00:59:47.470304 | orchestrator | ok: [testbed-node-0] 2026-02-28 00:59:47.470309 | orchestrator | ok: [testbed-node-2] 2026-02-28 00:59:47.470313 | orchestrator | 2026-02-28 00:59:47.470317 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-28 00:59:47.470321 | orchestrator | Saturday 28 February 2026 00:59:45 +0000 (0:00:03.405) 0:03:17.768 ***** 2026-02-28 00:59:47.470326 | orchestrator | skipping: [testbed-node-0] 2026-02-28 00:59:47.470330 | orchestrator | skipping: [testbed-node-1] 2026-02-28 00:59:47.470334 | orchestrator | skipping: [testbed-node-2] 2026-02-28 00:59:47.470339 | orchestrator | 2026-02-28 00:59:47.470343 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 00:59:47.470347 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-28 00:59:47.470353 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-28 00:59:47.470358 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-28 00:59:47.470366 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-28 00:59:47.470370 | orchestrator | 2026-02-28 00:59:47.470375 | orchestrator | 2026-02-28 00:59:47.470379 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-28 00:59:47.470383 | orchestrator | Saturday 28 February 2026 00:59:45 +0000 (0:00:00.266) 0:03:18.034 ***** 2026-02-28 00:59:47.470388 | orchestrator | =============================================================================== 2026-02-28 00:59:47.470392 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.57s 2026-02-28 00:59:47.470396 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.29s 2026-02-28 00:59:47.470401 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 19.22s 2026-02-28 00:59:47.470405 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.85s 2026-02-28 00:59:47.470414 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.82s 2026-02-28 00:59:47.470421 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.85s 2026-02-28 00:59:47.470426 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 6.86s 2026-02-28 00:59:47.470430 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.87s 2026-02-28 00:59:47.470434 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.81s 2026-02-28 00:59:47.470439 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.15s 2026-02-28 00:59:47.470443 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.92s 2026-02-28 00:59:47.470448 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.77s 2026-02-28 00:59:47.470452 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.75s 2026-02-28 00:59:47.470456 | orchestrator | 
service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.53s 2026-02-28 00:59:47.470461 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.41s 2026-02-28 00:59:47.470465 | orchestrator | Check MariaDB service --------------------------------------------------- 3.11s 2026-02-28 00:59:47.470470 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.94s 2026-02-28 00:59:47.470474 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.57s 2026-02-28 00:59:47.470478 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.47s 2026-02-28 00:59:47.470483 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.36s 2026-02-28 00:59:50.525249 | orchestrator | 2026-02-28 00:59:50 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:50.527495 | orchestrator | 2026-02-28 00:59:50 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 00:59:50.532237 | orchestrator | 2026-02-28 00:59:50 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 00:59:50.532298 | orchestrator | 2026-02-28 00:59:50 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:53.574382 | orchestrator | 2026-02-28 00:59:53 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:53.574511 | orchestrator | 2026-02-28 00:59:53 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 00:59:53.577104 | orchestrator | 2026-02-28 00:59:53 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 00:59:53.577162 | orchestrator | 2026-02-28 00:59:53 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:56.626597 | orchestrator | 2026-02-28 00:59:56 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 
2026-02-28 00:59:56.628245 | orchestrator | 2026-02-28 00:59:56 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 00:59:56.629960 | orchestrator | 2026-02-28 00:59:56 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 00:59:56.630119 | orchestrator | 2026-02-28 00:59:56 | INFO  | Wait 1 second(s) until the next check 2026-02-28 00:59:59.678481 | orchestrator | 2026-02-28 00:59:59 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 00:59:59.678901 | orchestrator | 2026-02-28 00:59:59 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 00:59:59.680089 | orchestrator | 2026-02-28 00:59:59 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 00:59:59.680113 | orchestrator | 2026-02-28 00:59:59 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:02.724675 | orchestrator | 2026-02-28 01:00:02 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 01:00:02.726075 | orchestrator | 2026-02-28 01:00:02 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 01:00:02.727454 | orchestrator | 2026-02-28 01:00:02 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:00:02.727680 | orchestrator | 2026-02-28 01:00:02 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:05.777213 | orchestrator | 2026-02-28 01:00:05 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 01:00:05.778246 | orchestrator | 2026-02-28 01:00:05 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 01:00:05.780284 | orchestrator | 2026-02-28 01:00:05 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:00:05.780314 | orchestrator | 2026-02-28 01:00:05 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:08.819503 | orchestrator | 2026-02-28 
01:00:08 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 01:00:08.819895 | orchestrator | 2026-02-28 01:00:08 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 01:00:08.820998 | orchestrator | 2026-02-28 01:00:08 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:00:08.821048 | orchestrator | 2026-02-28 01:00:08 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:11.870610 | orchestrator | 2026-02-28 01:00:11 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 01:00:11.872150 | orchestrator | 2026-02-28 01:00:11 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 01:00:11.873164 | orchestrator | 2026-02-28 01:00:11 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:00:11.873193 | orchestrator | 2026-02-28 01:00:11 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:14.910896 | orchestrator | 2026-02-28 01:00:14 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 01:00:14.911453 | orchestrator | 2026-02-28 01:00:14 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 01:00:14.912440 | orchestrator | 2026-02-28 01:00:14 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:00:14.912475 | orchestrator | 2026-02-28 01:00:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:00:17.946835 | orchestrator | 2026-02-28 01:00:17 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED 2026-02-28 01:00:17.947668 | orchestrator | 2026-02-28 01:00:17 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 01:00:17.949969 | orchestrator | 2026-02-28 01:00:17 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:00:17.950061 | orchestrator | 2026-02-28 01:00:17 | INFO  | Wait 1 
second(s) until the next check
2026-02-28 01:00:20.994958 | orchestrator | 2026-02-28 01:00:20 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state STARTED
2026-02-28 01:00:20.995510 | orchestrator | 2026-02-28 01:00:20 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED
2026-02-28 01:00:20.997589 | orchestrator | 2026-02-28 01:00:20 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED
2026-02-28 01:00:20.997641 | orchestrator | 2026-02-28 01:00:20 | INFO  | Wait 1 second(s) until the next check
[... the same three "is in state STARTED" checks and "Wait 1 second(s) until the next check" messages repeat every ~3 seconds from 01:00:24 through 01:01:03 ...]
2026-02-28 01:01:06.723063 | orchestrator | 2026-02-28 01:01:06 | INFO  | Task bd2acfbd-33cf-430a-9136-aee9001bc1cb is in state
SUCCESS
2026-02-28 01:01:06.725563 | orchestrator |
2026-02-28 01:01:06.725622 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-28 01:01:06.725657 | orchestrator | 2.16.14
2026-02-28 01:01:06.725683 | orchestrator |
2026-02-28 01:01:06.725688 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-28 01:01:06.725692 | orchestrator |
2026-02-28 01:01:06.725696 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-28 01:01:06.725701 | orchestrator | Saturday 28 February 2026 00:58:50 +0000 (0:00:00.740) 0:00:00.740 *****
2026-02-28 01:01:06.725781 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 01:01:06.725787 | orchestrator |
2026-02-28 01:01:06.725791 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-28 01:01:06.725795 | orchestrator | Saturday 28 February 2026 00:58:51 +0000 (0:00:00.706) 0:00:01.447 *****
2026-02-28 01:01:06.725799 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:01:06.725803 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:01:06.725841 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:01:06.725846 | orchestrator |
2026-02-28 01:01:06.725850 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-28 01:01:06.725854 | orchestrator | Saturday 28 February 2026 00:58:52 +0000 (0:00:00.656) 0:00:02.104 *****
2026-02-28 01:01:06.725896 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:01:06.725901 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:01:06.725906 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:01:06.725910 | orchestrator |
2026-02-28 01:01:06.725914 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-28 01:01:06.725918 | orchestrator | Saturday 28 February 2026 00:58:52 +0000 (0:00:00.319) 0:00:02.424 *****
2026-02-28 01:01:06.725921 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:01:06.726172 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:01:06.726183 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:01:06.726189 | orchestrator |
2026-02-28 01:01:06.726196 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-28 01:01:06.726230 | orchestrator | Saturday 28 February 2026 00:58:53 +0000 (0:00:00.941) 0:00:03.365 *****
2026-02-28 01:01:06.726238 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:01:06.726244 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:01:06.726250 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:01:06.726256 | orchestrator |
2026-02-28 01:01:06.726263 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-28 01:01:06.726269 | orchestrator | Saturday 28 February 2026 00:58:53 +0000 (0:00:00.343) 0:00:03.709 *****
2026-02-28 01:01:06.726276 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:01:06.726284 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:01:06.726291 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:01:06.726297 | orchestrator |
2026-02-28 01:01:06.726303 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-28 01:01:06.726311 | orchestrator | Saturday 28 February 2026 00:58:53 +0000 (0:00:00.321) 0:00:04.030 *****
2026-02-28 01:01:06.726317 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:01:06.726323 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:01:06.726329 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:01:06.726336 | orchestrator |
2026-02-28 01:01:06.726343 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-28 01:01:06.726532 | orchestrator | Saturday 28 February 2026 00:58:54 +0000 (0:00:00.350) 0:00:04.380 *****
2026-02-28 01:01:06.726546 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.726553 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.726559 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.726565 | orchestrator |
2026-02-28 01:01:06.726572 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-28 01:01:06.726578 | orchestrator | Saturday 28 February 2026 00:58:54 +0000 (0:00:00.562) 0:00:04.943 *****
2026-02-28 01:01:06.726584 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:01:06.726590 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:01:06.726596 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:01:06.726602 | orchestrator |
2026-02-28 01:01:06.726609 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-28 01:01:06.726668 | orchestrator | Saturday 28 February 2026 00:58:55 +0000 (0:00:00.320) 0:00:05.263 *****
2026-02-28 01:01:06.726677 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 01:01:06.726683 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 01:01:06.726703 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 01:01:06.726707 | orchestrator |
2026-02-28 01:01:06.726711 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-28 01:01:06.726715 | orchestrator | Saturday 28 February 2026 00:58:55 +0000 (0:00:00.717) 0:00:05.981 *****
2026-02-28 01:01:06.726719 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:01:06.726722 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:01:06.726726 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:01:06.726730 | orchestrator |
2026-02-28 01:01:06.726734 | orchestrator | TASK [ceph-facts : Find a running mon container]
*******************************
2026-02-28 01:01:06.726738 | orchestrator | Saturday 28 February 2026 00:58:56 +0000 (0:00:00.459) 0:00:06.441 *****
2026-02-28 01:01:06.726742 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-28 01:01:06.726746 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-28 01:01:06.726749 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-28 01:01:06.726755 | orchestrator |
2026-02-28 01:01:06.726762 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-28 01:01:06.726767 | orchestrator | Saturday 28 February 2026 00:58:58 +0000 (0:00:02.280) 0:00:08.721 *****
2026-02-28 01:01:06.726774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-28 01:01:06.726780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-28 01:01:06.726787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-28 01:01:06.726793 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.726799 | orchestrator |
2026-02-28 01:01:06.726842 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-28 01:01:06.726850 | orchestrator | Saturday 28 February 2026 00:58:59 +0000 (0:00:00.712) 0:00:09.433 *****
2026-02-28 01:01:06.726857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-28 01:01:06.726867 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-28 01:01:06.726874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-28 01:01:06.726880 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.726887 | orchestrator |
2026-02-28 01:01:06.726894 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-28 01:01:06.726900 | orchestrator | Saturday 28 February 2026 00:59:00 +0000 (0:00:00.925) 0:00:10.359 *****
2026-02-28 01:01:06.726909 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-28 01:01:06.726918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-28 01:01:06.726936 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-28 01:01:06.726943 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.726950 | orchestrator |
2026-02-28 01:01:06.726957 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-28 01:01:06.726964 | orchestrator | Saturday 28 February 2026 00:59:00 +0000 (0:00:00.452) 0:00:10.811 *****
2026-02-28 01:01:06.726979 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e84970512b39', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-28 00:58:57.071837', 'end': '2026-02-28 00:58:57.119404', 'delta': '0:00:00.047567', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e84970512b39'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-28 01:01:06.726990 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '11e58c0cbf96', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-28 00:58:57.874126', 'end': '2026-02-28 00:58:57.917271', 'delta': '0:00:00.043145', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['11e58c0cbf96'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-28 01:01:06.727020 | orchestrator | ok:
[testbed-node-3] => (item={'changed': False, 'stdout': '750a36a10c63', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-28 00:58:58.445308', 'end': '2026-02-28 00:58:58.479025', 'delta': '0:00:00.033717', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['750a36a10c63'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-28 01:01:06.727028 | orchestrator |
2026-02-28 01:01:06.727035 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-28 01:01:06.727043 | orchestrator | Saturday 28 February 2026 00:59:00 +0000 (0:00:00.230) 0:00:11.042 *****
2026-02-28 01:01:06.727051 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:01:06.727060 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:01:06.727066 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:01:06.727072 | orchestrator |
2026-02-28 01:01:06.727080 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-28 01:01:06.727086 | orchestrator | Saturday 28 February 2026 00:59:01 +0000 (0:00:00.517) 0:00:11.559 *****
2026-02-28 01:01:06.727100 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-28 01:01:06.727108 | orchestrator |
2026-02-28 01:01:06.727113 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-28 01:01:06.727120 | orchestrator | Saturday 28 February 2026 00:59:03 +0000 (0:00:01.768) 0:00:13.328 *****
2026-02-28 01:01:06.727125 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727131 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.727138 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.727151 | orchestrator |
2026-02-28 01:01:06.727159 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-28 01:01:06.727166 | orchestrator | Saturday 28 February 2026 00:59:03 +0000 (0:00:00.342) 0:00:13.671 *****
2026-02-28 01:01:06.727173 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727180 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.727187 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.727194 | orchestrator |
2026-02-28 01:01:06.727202 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-28 01:01:06.727209 | orchestrator | Saturday 28 February 2026 00:59:04 +0000 (0:00:00.465) 0:00:14.136 *****
2026-02-28 01:01:06.727216 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727223 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.727230 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.727236 | orchestrator |
2026-02-28 01:01:06.727242 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-28 01:01:06.727249 | orchestrator | Saturday 28 February 2026 00:59:04 +0000 (0:00:00.556) 0:00:14.693 *****
2026-02-28 01:01:06.727255 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:01:06.727261 | orchestrator |
2026-02-28 01:01:06.727267 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-28 01:01:06.727274 | orchestrator | Saturday 28 February 2026 00:59:04 +0000 (0:00:00.128) 0:00:14.822 *****
2026-02-28 01:01:06.727281 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727288 | orchestrator |
2026-02-28 01:01:06.727294 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-28 01:01:06.727301 | orchestrator | Saturday 28 February 2026 00:59:04 +0000 (0:00:00.259) 0:00:15.081 *****
2026-02-28 01:01:06.727309 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727315 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.727322 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.727328 | orchestrator |
2026-02-28 01:01:06.727335 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-28 01:01:06.727343 | orchestrator | Saturday 28 February 2026 00:59:05 +0000 (0:00:00.340) 0:00:15.421 *****
2026-02-28 01:01:06.727350 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727363 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.727371 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.727378 | orchestrator |
2026-02-28 01:01:06.727386 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-28 01:01:06.727392 | orchestrator | Saturday 28 February 2026 00:59:05 +0000 (0:00:00.374) 0:00:15.796 *****
2026-02-28 01:01:06.727400 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727407 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.727413 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.727420 | orchestrator |
2026-02-28 01:01:06.727427 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-28 01:01:06.727434 | orchestrator | Saturday 28 February 2026 00:59:06 +0000 (0:00:00.599) 0:00:16.395 *****
2026-02-28 01:01:06.727441 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727448 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.727454 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.727461 | orchestrator |
2026-02-28 01:01:06.727467 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-28
01:01:06.727481 | orchestrator | Saturday 28 February 2026 00:59:06 +0000 (0:00:00.346) 0:00:16.742 *****
2026-02-28 01:01:06.727487 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727494 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.727500 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.727506 | orchestrator |
2026-02-28 01:01:06.727512 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-28 01:01:06.727519 | orchestrator | Saturday 28 February 2026 00:59:07 +0000 (0:00:00.383) 0:00:17.125 *****
2026-02-28 01:01:06.727523 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727527 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.727531 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.727559 | orchestrator |
2026-02-28 01:01:06.727564 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-28 01:01:06.727570 | orchestrator | Saturday 28 February 2026 00:59:07 +0000 (0:00:00.346) 0:00:17.471 *****
2026-02-28 01:01:06.727576 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:01:06.727582 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:01:06.727588 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:01:06.727594 | orchestrator |
2026-02-28 01:01:06.727601 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-28 01:01:06.727608 | orchestrator | Saturday 28 February 2026 00:59:07 +0000 (0:00:00.559) 0:00:18.031 *****
2026-02-28 01:01:06.727616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e2365387--977d--5b6c--ac86--7516065bddb2-osd--block--e2365387--977d--5b6c--ac86--7516065bddb2', 'dm-uuid-LVM-LhuaNiqb1aaE0rrIXkJmdId6DTmzxYz3XAcZ1m8S7wRs0cGLbhdKMSdJMJpGp7FH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c221fe87--4514--5691--85ae--4cf2e32a6a79-osd--block--c221fe87--4514--5691--85ae--4cf2e32a6a79', 'dm-uuid-LVM-nco1HNB6DfIt66XyU5t0An12V8JIhY08K5rxDgsWq69tTojbp5MQly90yZNx9PcR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part1', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part14', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part15', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part16', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:01:06.727758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e-osd--block--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e', 'dm-uuid-LVM-oEvVqsETkumcFmfvX36Aswue9YtL0Ei3ctP892bqoVgrwRbVQy3lHoCDaUo4Po0H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e2365387--977d--5b6c--ac86--7516065bddb2-osd--block--e2365387--977d--5b6c--ac86--7516065bddb2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LVx0r5-4jjO-slpB-U2Vw-w1fq-rRDr-0rBMlv', 'scsi-0QEMU_QEMU_HARDDISK_5ed1e25e-e858-43bf-b647-15f2d5789185', 'scsi-SQEMU_QEMU_HARDDISK_5ed1e25e-e858-43bf-b647-15f2d5789185'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:01:06.727793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d8e79be--6c7a--5031--8b8d--1755de447a00-osd--block--4d8e79be--6c7a--5031--8b8d--1755de447a00', 'dm-uuid-LVM-NH1qV3EAURygPY7zLz8kOuc6LxLritDoFagI2LhLVeBWG1aSAHzbjJjK5kEppla2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c221fe87--4514--5691--85ae--4cf2e32a6a79-osd--block--c221fe87--4514--5691--85ae--4cf2e32a6a79'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kd3Qnk-expA-zOCH-MYLJ-G11h-yid9-r3LwJO', 'scsi-0QEMU_QEMU_HARDDISK_e0efcc73-9d13-408e-8d84-f67f704dc102', 'scsi-SQEMU_QEMU_HARDDISK_e0efcc73-9d13-408e-8d84-f67f704dc102'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-28 01:01:06.727806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-28 01:01:06.727813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dee7b1c7-019b-4aff-807a-ca0205e3afa9', 'scsi-SQEMU_QEMU_HARDDISK_dee7b1c7-019b-4aff-807a-ca0205e3afa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.727831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.727838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.727863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-28 01:01:06.727871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.727878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.727885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.727892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.727899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.727919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part1', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part14', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part15', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part16', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.727935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e-osd--block--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0DlmcX-vyA1-FdZ4-rgBO-p0T7-jRuf-2G0Fm4', 'scsi-0QEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d', 'scsi-SQEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.727942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4d8e79be--6c7a--5031--8b8d--1755de447a00-osd--block--4d8e79be--6c7a--5031--8b8d--1755de447a00'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0lv7fw-TjZi-LVNE-0ofO-4ikh-qx6U-rJVolm', 'scsi-0QEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd', 'scsi-SQEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.727949 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.727956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0', 'scsi-SQEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.727972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.727979 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:06.727986 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e9a8b5b--9130--5945--a817--2135e2f57de8-osd--block--4e9a8b5b--9130--5945--a817--2135e2f57de8', 'dm-uuid-LVM-XCQn1NXuiFygAu0FMb3HnncWfDliS40aFj9Jw2XHuSeYn6DkfwfnLsqCS3stU1fW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.728001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--160cc444--1ede--5c9f--8076--16a146e97f10-osd--block--160cc444--1ede--5c9f--8076--16a146e97f10', 'dm-uuid-LVM-BqbQ0yf6eu0XC1OVoMqEm5OgBM88FmsT5sbJbDh3Pd1We2bx9OYSm5g8PLfSa9mW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.728008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.728015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.728021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.728027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.728033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.728045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-02-28 01:01:06.728056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.728062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-28 01:01:06.728076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part1', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part14', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part15', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part16', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.728084 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4e9a8b5b--9130--5945--a817--2135e2f57de8-osd--block--4e9a8b5b--9130--5945--a817--2135e2f57de8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DDF6Ml-BMxH-QAHp-FH9m-xT4N-KaRX-rJAGxo', 'scsi-0QEMU_QEMU_HARDDISK_d23b355c-9115-4a32-83d9-d27c9bfa2660', 'scsi-SQEMU_QEMU_HARDDISK_d23b355c-9115-4a32-83d9-d27c9bfa2660'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.728100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--160cc444--1ede--5c9f--8076--16a146e97f10-osd--block--160cc444--1ede--5c9f--8076--16a146e97f10'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eWaxZe-zQDk-vASa-bRYd-KRho-lQym-x8ZyHi', 'scsi-0QEMU_QEMU_HARDDISK_2f6d4770-6b80-415d-bad7-939321dd0d14', 'scsi-SQEMU_QEMU_HARDDISK_2f6d4770-6b80-415d-bad7-939321dd0d14'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.728108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74c1e4c5-3021-4968-89ab-b5ccd24df7f0', 'scsi-SQEMU_QEMU_HARDDISK_74c1e4c5-3021-4968-89ab-b5ccd24df7f0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.728120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-28 01:01:06.728126 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:06.728133 | orchestrator | 2026-02-28 01:01:06.728139 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-28 01:01:06.728146 | orchestrator | Saturday 28 February 2026 00:59:08 +0000 (0:00:00.703) 0:00:18.734 ***** 2026-02-28 01:01:06.728154 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e2365387--977d--5b6c--ac86--7516065bddb2-osd--block--e2365387--977d--5b6c--ac86--7516065bddb2', 'dm-uuid-LVM-LhuaNiqb1aaE0rrIXkJmdId6DTmzxYz3XAcZ1m8S7wRs0cGLbhdKMSdJMJpGp7FH'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c221fe87--4514--5691--85ae--4cf2e32a6a79-osd--block--c221fe87--4514--5691--85ae--4cf2e32a6a79', 'dm-uuid-LVM-nco1HNB6DfIt66XyU5t0An12V8JIhY08K5rxDgsWq69tTojbp5MQly90yZNx9PcR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728174 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728203 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728235 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728249 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part1', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part14', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part15', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part16', 'scsi-SQEMU_QEMU_HARDDISK_31d0474f-d148-4fa7-8e21-0caa01fecd6c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e2365387--977d--5b6c--ac86--7516065bddb2-osd--block--e2365387--977d--5b6c--ac86--7516065bddb2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LVx0r5-4jjO-slpB-U2Vw-w1fq-rRDr-0rBMlv', 'scsi-0QEMU_QEMU_HARDDISK_5ed1e25e-e858-43bf-b647-15f2d5789185', 'scsi-SQEMU_QEMU_HARDDISK_5ed1e25e-e858-43bf-b647-15f2d5789185'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728269 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e-osd--block--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e', 'dm-uuid-LVM-oEvVqsETkumcFmfvX36Aswue9YtL0Ei3ctP892bqoVgrwRbVQy3lHoCDaUo4Po0H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728279 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c221fe87--4514--5691--85ae--4cf2e32a6a79-osd--block--c221fe87--4514--5691--85ae--4cf2e32a6a79'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kd3Qnk-expA-zOCH-MYLJ-G11h-yid9-r3LwJO', 'scsi-0QEMU_QEMU_HARDDISK_e0efcc73-9d13-408e-8d84-f67f704dc102', 'scsi-SQEMU_QEMU_HARDDISK_e0efcc73-9d13-408e-8d84-f67f704dc102'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728291 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d8e79be--6c7a--5031--8b8d--1755de447a00-osd--block--4d8e79be--6c7a--5031--8b8d--1755de447a00', 'dm-uuid-LVM-NH1qV3EAURygPY7zLz8kOuc6LxLritDoFagI2LhLVeBWG1aSAHzbjJjK5kEppla2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728295 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dee7b1c7-019b-4aff-807a-ca0205e3afa9', 'scsi-SQEMU_QEMU_HARDDISK_dee7b1c7-019b-4aff-807a-ca0205e3afa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728373 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728377 | orchestrator | skipping: 
[testbed-node-3] 2026-02-28 01:01:06.728386 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728391 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728403 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728413 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728417 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e9a8b5b--9130--5945--a817--2135e2f57de8-osd--block--4e9a8b5b--9130--5945--a817--2135e2f57de8', 'dm-uuid-LVM-XCQn1NXuiFygAu0FMb3HnncWfDliS40aFj9Jw2XHuSeYn6DkfwfnLsqCS3stU1fW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728424 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728428 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--160cc444--1ede--5c9f--8076--16a146e97f10-osd--block--160cc444--1ede--5c9f--8076--16a146e97f10', 'dm-uuid-LVM-BqbQ0yf6eu0XC1OVoMqEm5OgBM88FmsT5sbJbDh3Pd1We2bx9OYSm5g8PLfSa9mW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728438 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728445 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part1', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part14', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part15', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part16', 'scsi-SQEMU_QEMU_HARDDISK_762d8a21-5374-4a80-ba11-21b5200d3acc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-28 01:01:06.728468 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728480 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e-osd--block--4eb2c6f9--5e6f--5ebf--87cf--ca4fabb96f6e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0DlmcX-vyA1-FdZ4-rgBO-p0T7-jRuf-2G0Fm4', 'scsi-0QEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d', 'scsi-SQEMU_QEMU_HARDDISK_38dbb877-01c9-4d16-8c09-dabf832ed02d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728491 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728498 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4d8e79be--6c7a--5031--8b8d--1755de447a00-osd--block--4d8e79be--6c7a--5031--8b8d--1755de447a00'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0lv7fw-TjZi-LVNE-0ofO-4ikh-qx6U-rJVolm', 'scsi-0QEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd', 'scsi-SQEMU_QEMU_HARDDISK_6e2886f6-2d16-4655-86a5-4832cbb6b1fd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728503 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728510 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728517 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0', 'scsi-SQEMU_QEMU_HARDDISK_4f48d2b3-724f-4801-86d9-3346f8b02ca0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728521 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728530 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 
'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728534 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728538 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:06.728542 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728551 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part1', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part14', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part15', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part16', 'scsi-SQEMU_QEMU_HARDDISK_48b55e47-7594-497c-8e74-39b6ba356462-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-28 01:01:06.728560 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4e9a8b5b--9130--5945--a817--2135e2f57de8-osd--block--4e9a8b5b--9130--5945--a817--2135e2f57de8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DDF6Ml-BMxH-QAHp-FH9m-xT4N-KaRX-rJAGxo', 'scsi-0QEMU_QEMU_HARDDISK_d23b355c-9115-4a32-83d9-d27c9bfa2660', 'scsi-SQEMU_QEMU_HARDDISK_d23b355c-9115-4a32-83d9-d27c9bfa2660'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--160cc444--1ede--5c9f--8076--16a146e97f10-osd--block--160cc444--1ede--5c9f--8076--16a146e97f10'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eWaxZe-zQDk-vASa-bRYd-KRho-lQym-x8ZyHi', 'scsi-0QEMU_QEMU_HARDDISK_2f6d4770-6b80-415d-bad7-939321dd0d14', 'scsi-SQEMU_QEMU_HARDDISK_2f6d4770-6b80-415d-bad7-939321dd0d14'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728570 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74c1e4c5-3021-4968-89ab-b5ccd24df7f0', 'scsi-SQEMU_QEMU_HARDDISK_74c1e4c5-3021-4968-89ab-b5ccd24df7f0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728577 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-28-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-28 01:01:06.728585 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:06.728589 | orchestrator | 2026-02-28 01:01:06.728595 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-28 01:01:06.728601 | orchestrator | Saturday 28 February 2026 00:59:09 +0000 (0:00:00.692) 0:00:19.427 ***** 2026-02-28 01:01:06.728607 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:06.728614 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:06.728620 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:06.728673 | orchestrator | 2026-02-28 01:01:06.728682 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-28 01:01:06.728687 | orchestrator | Saturday 28 February 2026 00:59:10 +0000 (0:00:00.726) 0:00:20.154 ***** 2026-02-28 01:01:06.728693 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:06.728699 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:06.728705 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:06.728712 | orchestrator | 2026-02-28 01:01:06.728718 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-28 01:01:06.728724 | orchestrator | Saturday 28 February 2026 00:59:10 +0000 (0:00:00.617) 0:00:20.772 ***** 2026-02-28 01:01:06.728730 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:06.728735 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:06.728741 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:06.728747 | orchestrator | 2026-02-28 01:01:06.728754 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-28 01:01:06.728760 | orchestrator | Saturday 28 February 2026 00:59:11 +0000 (0:00:00.712) 
0:00:21.484 ***** 2026-02-28 01:01:06.728766 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.728772 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:06.728779 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:06.728785 | orchestrator | 2026-02-28 01:01:06.728791 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-28 01:01:06.728798 | orchestrator | Saturday 28 February 2026 00:59:11 +0000 (0:00:00.336) 0:00:21.820 ***** 2026-02-28 01:01:06.728804 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.728810 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:06.728816 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:06.728823 | orchestrator | 2026-02-28 01:01:06.728829 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-28 01:01:06.728836 | orchestrator | Saturday 28 February 2026 00:59:12 +0000 (0:00:00.441) 0:00:22.262 ***** 2026-02-28 01:01:06.728842 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.728848 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:06.728854 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:06.728860 | orchestrator | 2026-02-28 01:01:06.728866 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-28 01:01:06.728872 | orchestrator | Saturday 28 February 2026 00:59:12 +0000 (0:00:00.626) 0:00:22.888 ***** 2026-02-28 01:01:06.728878 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-28 01:01:06.728884 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-28 01:01:06.728890 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-28 01:01:06.728896 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-28 01:01:06.728902 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-28 01:01:06.728908 | 
orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-28 01:01:06.728915 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-28 01:01:06.728921 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-28 01:01:06.728927 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-28 01:01:06.728934 | orchestrator | 2026-02-28 01:01:06.728940 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-28 01:01:06.728946 | orchestrator | Saturday 28 February 2026 00:59:13 +0000 (0:00:00.913) 0:00:23.802 ***** 2026-02-28 01:01:06.728966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-28 01:01:06.728973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-28 01:01:06.728979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-28 01:01:06.728985 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.728992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-28 01:01:06.728999 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-28 01:01:06.729005 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-28 01:01:06.729011 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:06.729018 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-28 01:01:06.729025 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-28 01:01:06.729031 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-28 01:01:06.729038 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:06.729044 | orchestrator | 2026-02-28 01:01:06.729052 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-28 01:01:06.729059 | orchestrator | Saturday 28 February 2026 00:59:14 +0000 (0:00:00.419) 0:00:24.221 ***** 2026-02-28 
01:01:06.729066 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:01:06.729073 | orchestrator | 2026-02-28 01:01:06.729081 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-28 01:01:06.729089 | orchestrator | Saturday 28 February 2026 00:59:14 +0000 (0:00:00.846) 0:00:25.068 ***** 2026-02-28 01:01:06.729105 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.729113 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:06.729120 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:06.729127 | orchestrator | 2026-02-28 01:01:06.729133 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-28 01:01:06.729140 | orchestrator | Saturday 28 February 2026 00:59:15 +0000 (0:00:00.414) 0:00:25.483 ***** 2026-02-28 01:01:06.729147 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.729154 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:06.729160 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:06.729166 | orchestrator | 2026-02-28 01:01:06.729173 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-28 01:01:06.729180 | orchestrator | Saturday 28 February 2026 00:59:15 +0000 (0:00:00.386) 0:00:25.869 ***** 2026-02-28 01:01:06.729186 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.729190 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:06.729194 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:01:06.729198 | orchestrator | 2026-02-28 01:01:06.729202 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-28 01:01:06.729206 | orchestrator | Saturday 28 February 2026 00:59:16 +0000 (0:00:00.367) 0:00:26.236 ***** 2026-02-28 
01:01:06.729210 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:06.729214 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:06.729217 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:06.729221 | orchestrator | 2026-02-28 01:01:06.729225 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-28 01:01:06.729229 | orchestrator | Saturday 28 February 2026 00:59:16 +0000 (0:00:00.725) 0:00:26.962 ***** 2026-02-28 01:01:06.729233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 01:01:06.729237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 01:01:06.729240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 01:01:06.729244 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.729248 | orchestrator | 2026-02-28 01:01:06.729254 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-28 01:01:06.729260 | orchestrator | Saturday 28 February 2026 00:59:17 +0000 (0:00:00.434) 0:00:27.396 ***** 2026-02-28 01:01:06.729274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 01:01:06.729281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 01:01:06.729287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 01:01:06.729293 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.729300 | orchestrator | 2026-02-28 01:01:06.729306 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-28 01:01:06.729313 | orchestrator | Saturday 28 February 2026 00:59:17 +0000 (0:00:00.438) 0:00:27.835 ***** 2026-02-28 01:01:06.729319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-28 01:01:06.729326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-28 01:01:06.729332 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-28 01:01:06.729338 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.729344 | orchestrator | 2026-02-28 01:01:06.729351 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-28 01:01:06.729356 | orchestrator | Saturday 28 February 2026 00:59:18 +0000 (0:00:00.430) 0:00:28.265 ***** 2026-02-28 01:01:06.729360 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:01:06.729364 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:01:06.729368 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:01:06.729372 | orchestrator | 2026-02-28 01:01:06.729375 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-28 01:01:06.729379 | orchestrator | Saturday 28 February 2026 00:59:18 +0000 (0:00:00.390) 0:00:28.656 ***** 2026-02-28 01:01:06.729383 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-28 01:01:06.729387 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-28 01:01:06.729391 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-28 01:01:06.729395 | orchestrator | 2026-02-28 01:01:06.729399 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-28 01:01:06.729403 | orchestrator | Saturday 28 February 2026 00:59:19 +0000 (0:00:00.619) 0:00:29.276 ***** 2026-02-28 01:01:06.729407 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 01:01:06.729417 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 01:01:06.729421 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 01:01:06.729425 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 01:01:06.729429 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-28 01:01:06.729433 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 01:01:06.729437 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-28 01:01:06.729441 | orchestrator | 2026-02-28 01:01:06.729445 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-28 01:01:06.729448 | orchestrator | Saturday 28 February 2026 00:59:20 +0000 (0:00:01.226) 0:00:30.503 ***** 2026-02-28 01:01:06.729452 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-28 01:01:06.729456 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-28 01:01:06.729460 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-28 01:01:06.729464 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-28 01:01:06.729467 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-28 01:01:06.729471 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-28 01:01:06.729480 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-28 01:01:06.729484 | orchestrator | 2026-02-28 01:01:06.729488 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-28 01:01:06.729495 | orchestrator | Saturday 28 February 2026 00:59:22 +0000 (0:00:02.352) 0:00:32.855 ***** 2026-02-28 01:01:06.729499 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:01:06.729503 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:01:06.729507 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-28 01:01:06.729511 | orchestrator | 2026-02-28 01:01:06.729515 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-28 01:01:06.729519 | orchestrator | Saturday 28 February 2026 00:59:23 +0000 (0:00:00.444) 0:00:33.300 ***** 2026-02-28 01:01:06.729524 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:01:06.729529 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:01:06.729533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:01:06.729537 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:01:06.729543 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-28 01:01:06.729549 | orchestrator | 2026-02-28 01:01:06.729556 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-28 01:01:06.729563 | orchestrator | Saturday 28 February 2026 01:00:08 +0000 (0:00:45.270) 0:01:18.571 ***** 2026-02-28 01:01:06.729569 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729576 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729584 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729591 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729597 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729604 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729611 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-28 01:01:06.729618 | orchestrator | 2026-02-28 01:01:06.729641 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-28 01:01:06.729653 | orchestrator | Saturday 28 February 2026 01:00:33 +0000 (0:00:25.234) 0:01:43.806 ***** 2026-02-28 01:01:06.729659 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729665 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729672 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729678 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729685 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729696 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729702 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-28 01:01:06.729708 | orchestrator | 2026-02-28 01:01:06.729714 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-28 01:01:06.729720 | orchestrator | Saturday 28 February 2026 01:00:46 +0000 (0:00:12.785) 0:01:56.592 ***** 2026-02-28 01:01:06.729727 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729733 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:06.729740 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:06.729745 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729752 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:06.729765 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:06.729772 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729779 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:06.729786 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:06.729792 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729798 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:06.729804 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:06.729810 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729816 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-28 01:01:06.729822 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:06.729828 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-28 01:01:06.729834 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-28 01:01:06.729840 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-28 01:01:06.729846 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-28 01:01:06.729852 | orchestrator | 2026-02-28 01:01:06.729855 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:01:06.729860 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-28 01:01:06.729865 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-28 01:01:06.729869 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-28 01:01:06.729873 | orchestrator | 2026-02-28 01:01:06.729877 | orchestrator | 2026-02-28 01:01:06.729881 | orchestrator | 2026-02-28 01:01:06.729884 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:01:06.729888 | orchestrator | Saturday 28 February 2026 01:01:05 +0000 (0:00:18.643) 0:02:15.235 ***** 2026-02-28 01:01:06.729892 | orchestrator | =============================================================================== 2026-02-28 01:01:06.729896 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.27s 2026-02-28 01:01:06.729900 | orchestrator | generate keys ---------------------------------------------------------- 25.23s 2026-02-28 01:01:06.729903 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.64s 
2026-02-28 01:01:06.729913 | orchestrator | get keys from monitors ------------------------------------------------- 12.79s 2026-02-28 01:01:06.729917 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.35s 2026-02-28 01:01:06.729920 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.28s 2026-02-28 01:01:06.729924 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.77s 2026-02-28 01:01:06.729928 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.23s 2026-02-28 01:01:06.729932 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.94s 2026-02-28 01:01:06.729935 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.93s 2026-02-28 01:01:06.729939 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.91s 2026-02-28 01:01:06.729943 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.85s 2026-02-28 01:01:06.729951 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s 2026-02-28 01:01:06.729955 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.73s 2026-02-28 01:01:06.729959 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.72s 2026-02-28 01:01:06.729962 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.71s 2026-02-28 01:01:06.729966 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.71s 2026-02-28 01:01:06.729970 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.71s 2026-02-28 01:01:06.729974 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.70s 2026-02-28 
01:01:06.729977 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.69s 2026-02-28 01:01:06.729981 | orchestrator | 2026-02-28 01:01:06 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 01:01:06.730863 | orchestrator | 2026-02-28 01:01:06 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:06.730887 | orchestrator | 2026-02-28 01:01:06 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:09.782935 | orchestrator | 2026-02-28 01:01:09 | INFO  | Task be927962-c00d-4200-b711-6b5e299ec696 is in state STARTED 2026-02-28 01:01:09.785601 | orchestrator | 2026-02-28 01:01:09 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 01:01:09.786944 | orchestrator | 2026-02-28 01:01:09 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:09.787083 | orchestrator | 2026-02-28 01:01:09 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:12.837467 | orchestrator | 2026-02-28 01:01:12 | INFO  | Task be927962-c00d-4200-b711-6b5e299ec696 is in state STARTED 2026-02-28 01:01:12.840164 | orchestrator | 2026-02-28 01:01:12 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 01:01:12.841750 | orchestrator | 2026-02-28 01:01:12 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:12.842114 | orchestrator | 2026-02-28 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:15.877307 | orchestrator | 2026-02-28 01:01:15 | INFO  | Task be927962-c00d-4200-b711-6b5e299ec696 is in state STARTED 2026-02-28 01:01:15.878889 | orchestrator | 2026-02-28 01:01:15 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state STARTED 2026-02-28 01:01:15.881208 | orchestrator | 2026-02-28 01:01:15 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:15.881234 | orchestrator | 
2026-02-28 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:40.305086 | orchestrator | 2026-02-28 01:01:40 | INFO  | Task be927962-c00d-4200-b711-6b5e299ec696 is in state STARTED 2026-02-28 01:01:40.311116 | orchestrator | 2026-02-28 01:01:40 | INFO  | Task 7a8b87b6-caa6-46a3-a995-a480f00372c2 is in state 
SUCCESS 2026-02-28 01:01:40.312732 | orchestrator | 2026-02-28 01:01:40.312764 | orchestrator | 2026-02-28 01:01:40.312775 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:01:40.312785 | orchestrator | 2026-02-28 01:01:40.312795 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:01:40.312805 | orchestrator | Saturday 28 February 2026 00:59:50 +0000 (0:00:00.323) 0:00:00.323 ***** 2026-02-28 01:01:40.312814 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:40.312825 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:40.312834 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:40.312842 | orchestrator | 2026-02-28 01:01:40.312852 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:01:40.312862 | orchestrator | Saturday 28 February 2026 00:59:50 +0000 (0:00:00.326) 0:00:00.649 ***** 2026-02-28 01:01:40.312868 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-28 01:01:40.312875 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-28 01:01:40.312881 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-28 01:01:40.312886 | orchestrator | 2026-02-28 01:01:40.312892 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-28 01:01:40.312897 | orchestrator | 2026-02-28 01:01:40.312903 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:01:40.312909 | orchestrator | Saturday 28 February 2026 00:59:51 +0000 (0:00:00.500) 0:00:01.150 ***** 2026-02-28 01:01:40.312915 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:01:40.312921 | orchestrator | 2026-02-28 01:01:40.312927 | orchestrator | TASK [horizon : Ensuring config 
directories exist] ***************************** 2026-02-28 01:01:40.312932 | orchestrator | Saturday 28 February 2026 00:59:51 +0000 (0:00:00.572) 0:00:01.723 ***** 2026-02-28 01:01:40.312958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:40.312990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:40.313002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:40.313014 | orchestrator | 2026-02-28 01:01:40.313019 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-28 01:01:40.313025 | orchestrator | Saturday 28 February 2026 00:59:53 +0000 (0:00:01.211) 0:00:02.935 ***** 2026-02-28 01:01:40.313031 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:01:40.313036 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:01:40.313042 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:01:40.313047 | orchestrator | 2026-02-28 01:01:40.313053 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:01:40.313058 | orchestrator | Saturday 28 February 2026 00:59:53 +0000 (0:00:00.538) 0:00:03.473 ***** 2026-02-28 01:01:40.313064 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-28 01:01:40.313073 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-28 01:01:40.313079 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-28 01:01:40.313085 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  
2026-02-28 01:01:40.313090 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-28 01:01:40.313096 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-28 01:01:40.313101 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-28 01:01:40.313107 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-28 01:01:40.313112 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-28 01:01:40.313118 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-28 01:01:40.313123 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-28 01:01:40.313129 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-28 01:01:40.313134 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-28 01:01:40.313140 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-28 01:01:40.313146 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-28 01:01:40.313151 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-28 01:01:40.313157 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-28 01:01:40.313162 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-28 01:01:40.313168 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-28 01:01:40.313173 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-28 01:01:40.313179 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-28 01:01:40.313184 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-28 01:01:40.313190 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-28 01:01:40.313195 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-28 01:01:40.313203 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-28 01:01:40.313210 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-28 01:01:40.313262 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-28 01:01:40.313301 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-28 01:01:40.313463 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-28 01:01:40.313473 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-28 01:01:40.313480 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-28 01:01:40.313487 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-28 01:01:40.313494 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-28 01:01:40.313501 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-28 01:01:40.313508 | orchestrator |
2026-02-28 01:01:40.313515 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:01:40.313523 | orchestrator | Saturday 28 February 2026 00:59:54 +0000 (0:00:00.835) 0:00:04.309 *****
2026-02-28 01:01:40.313529 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:40.313536 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:40.313543 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:40.313550 | orchestrator |
2026-02-28 01:01:40.313557 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:01:40.313563 | orchestrator | Saturday 28 February 2026 00:59:54 +0000 (0:00:00.343) 0:00:04.653 *****
2026-02-28 01:01:40.313569 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.313575 | orchestrator |
2026-02-28 01:01:40.313586 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:01:40.313593 | orchestrator | Saturday 28 February 2026 00:59:55 +0000 (0:00:00.174) 0:00:04.827 *****
2026-02-28 01:01:40.313599 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.313605 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.313611 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.313617 | orchestrator |
2026-02-28 01:01:40.313623 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:01:40.313651 | orchestrator | Saturday 28 February 2026 00:59:55 +0000 (0:00:00.559) 0:00:05.387 *****
2026-02-28 01:01:40.313662 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:40.313668 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:40.313674 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:40.313680 | orchestrator |
2026-02-28 01:01:40.313686 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:01:40.313692 | orchestrator | Saturday 28 February 2026 00:59:55 +0000 (0:00:00.337) 0:00:05.724 *****
2026-02-28 01:01:40.313697 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.313703 | orchestrator |
2026-02-28 01:01:40.313709 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:01:40.313715 | orchestrator | Saturday 28 February 2026 00:59:56 +0000 (0:00:00.159) 0:00:05.884 *****
2026-02-28 01:01:40.313721 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.313726 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.313732 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.313738 | orchestrator |
2026-02-28 01:01:40.313751 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:01:40.313757 | orchestrator | Saturday 28 February 2026 00:59:56 +0000 (0:00:00.338) 0:00:06.222 *****
2026-02-28 01:01:40.313763 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:40.313769 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:40.313775 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:40.313780 | orchestrator |
2026-02-28 01:01:40.313786 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:01:40.313792 | orchestrator | Saturday 28 February 2026 00:59:56 +0000 (0:00:00.361) 0:00:06.583 *****
2026-02-28 01:01:40.313798 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.313804 | orchestrator |
2026-02-28 01:01:40.313810 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:01:40.313816 | orchestrator | Saturday 28 February 2026 00:59:57 +0000 (0:00:00.362) 0:00:06.946 *****
2026-02-28 01:01:40.313822 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.313827 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.313833 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.313839 | orchestrator |
2026-02-28 01:01:40.313845 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:01:40.313851 | orchestrator | Saturday 28 February 2026 00:59:57 +0000 (0:00:00.328) 0:00:07.275 *****
2026-02-28 01:01:40.313857 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:40.313863 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:40.313869 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:40.313879 | orchestrator |
2026-02-28 01:01:40.313888 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:01:40.313898 | orchestrator | Saturday 28 February 2026 00:59:57 +0000 (0:00:00.341) 0:00:07.616 *****
2026-02-28 01:01:40.313907 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.313916 | orchestrator |
2026-02-28 01:01:40.313925 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:01:40.313934 | orchestrator | Saturday 28 February 2026 00:59:57 +0000 (0:00:00.127) 0:00:07.744 *****
2026-02-28 01:01:40.313942 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.313951 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.313968 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.313978 | orchestrator |
2026-02-28 01:01:40.313988 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:01:40.313998 | orchestrator | Saturday 28 February 2026 00:59:58 +0000 (0:00:00.334) 0:00:08.079 *****
2026-02-28 01:01:40.314008 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:40.314056 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:40.314065 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:40.314070 | orchestrator |
2026-02-28 01:01:40.314076 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:01:40.314082 | orchestrator | Saturday 28 February 2026 00:59:58 +0000 (0:00:00.567) 0:00:08.647 *****
2026-02-28 01:01:40.314088 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314094 | orchestrator |
2026-02-28 01:01:40.314100 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:01:40.314106 | orchestrator | Saturday 28 February 2026 00:59:59 +0000 (0:00:00.163) 0:00:08.810 *****
2026-02-28 01:01:40.314112 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314118 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.314123 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.314129 | orchestrator |
2026-02-28 01:01:40.314135 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:01:40.314141 | orchestrator | Saturday 28 February 2026 00:59:59 +0000 (0:00:00.332) 0:00:09.143 *****
2026-02-28 01:01:40.314147 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:40.314153 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:40.314159 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:40.314164 | orchestrator |
2026-02-28 01:01:40.314170 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:01:40.314182 | orchestrator | Saturday 28 February 2026 00:59:59 +0000 (0:00:00.351) 0:00:09.494 *****
2026-02-28 01:01:40.314188 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314194 | orchestrator |
2026-02-28 01:01:40.314200 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:01:40.314206 | orchestrator | Saturday 28 February 2026 00:59:59 +0000 (0:00:00.147) 0:00:09.641 *****
2026-02-28 01:01:40.314211 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314217 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.314223 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.314229 | orchestrator |
2026-02-28 01:01:40.314235 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:01:40.314248 | orchestrator | Saturday 28 February 2026 01:00:00 +0000 (0:00:00.404) 0:00:10.046 *****
2026-02-28 01:01:40.314254 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:40.314260 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:40.314266 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:40.314272 | orchestrator |
2026-02-28 01:01:40.314278 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:01:40.314283 | orchestrator | Saturday 28 February 2026 01:00:00 +0000 (0:00:00.676) 0:00:10.723 *****
2026-02-28 01:01:40.314289 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314295 | orchestrator |
2026-02-28 01:01:40.314301 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:01:40.314307 | orchestrator | Saturday 28 February 2026 01:00:01 +0000 (0:00:00.143) 0:00:10.866 *****
2026-02-28 01:01:40.314313 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314319 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.314324 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.314330 | orchestrator |
2026-02-28 01:01:40.314336 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:01:40.314342 | orchestrator | Saturday 28 February 2026 01:00:01 +0000 (0:00:00.334) 0:00:11.201 *****
2026-02-28 01:01:40.314348 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:40.314354 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:40.314360 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:40.314365 | orchestrator |
2026-02-28 01:01:40.314371 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:01:40.314377 | orchestrator | Saturday 28 February 2026 01:00:01 +0000 (0:00:00.448) 0:00:11.649 *****
2026-02-28 01:01:40.314383 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314389 | orchestrator |
2026-02-28 01:01:40.314395 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:01:40.314401 | orchestrator | Saturday 28 February 2026 01:00:02 +0000 (0:00:00.218) 0:00:11.868 *****
2026-02-28 01:01:40.314406 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314412 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.314418 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.314424 | orchestrator |
2026-02-28 01:01:40.314430 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:01:40.314436 | orchestrator | Saturday 28 February 2026 01:00:02 +0000 (0:00:00.619) 0:00:12.488 *****
2026-02-28 01:01:40.314441 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:40.314447 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:40.314453 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:40.314459 | orchestrator |
2026-02-28 01:01:40.314465 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:01:40.314471 | orchestrator | Saturday 28 February 2026 01:00:03 +0000 (0:00:00.336) 0:00:12.825 *****
2026-02-28 01:01:40.314477 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314482 | orchestrator |
2026-02-28 01:01:40.314488 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:01:40.314494 | orchestrator | Saturday 28 February 2026 01:00:03 +0000 (0:00:00.158) 0:00:12.983 *****
2026-02-28 01:01:40.314500 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314510 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.314516 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.314522 | orchestrator |
2026-02-28 01:01:40.314528 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-28 01:01:40.314534 | orchestrator | Saturday 28 February 2026 01:00:03 +0000 (0:00:00.344) 0:00:13.328 *****
2026-02-28 01:01:40.314540 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:01:40.314546 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:01:40.314552 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:01:40.314557 | orchestrator |
2026-02-28 01:01:40.314563 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-28 01:01:40.314573 | orchestrator | Saturday 28 February 2026 01:00:03 +0000 (0:00:00.355) 0:00:13.683 *****
2026-02-28 01:01:40.314579 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314585 | orchestrator |
2026-02-28 01:01:40.314591 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-28 01:01:40.314597 | orchestrator | Saturday 28 February 2026 01:00:04 +0000 (0:00:00.154) 0:00:13.838 *****
2026-02-28 01:01:40.314603 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314608 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.314614 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.314620 | orchestrator |
2026-02-28 01:01:40.314626 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-28 01:01:40.314666 | orchestrator | Saturday 28 February 2026 01:00:04 +0000 (0:00:00.562) 0:00:14.401 *****
2026-02-28 01:01:40.314673 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:01:40.314678 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:01:40.314684 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:01:40.314690 | orchestrator |
2026-02-28 01:01:40.314696 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-28 01:01:40.314702 | orchestrator | Saturday 28 February 2026 01:00:06 +0000 (0:00:01.870) 0:00:16.272 *****
2026-02-28 01:01:40.314708 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-28 01:01:40.314714 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-28 01:01:40.314720 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-28 01:01:40.314725 | orchestrator |
2026-02-28 01:01:40.314731 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-28 01:01:40.314737 | orchestrator | Saturday 28 February 2026 01:00:08 +0000 (0:00:02.092) 0:00:18.364 *****
2026-02-28 01:01:40.314743 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-28 01:01:40.314750 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-28 01:01:40.314756 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-28 01:01:40.314762 | orchestrator |
2026-02-28 01:01:40.314768 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-02-28 01:01:40.314777 | orchestrator | Saturday 28 February 2026 01:00:11 +0000 (0:00:02.784) 0:00:21.148 *****
2026-02-28 01:01:40.314783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-28 01:01:40.314789 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-28 01:01:40.314795 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-28 01:01:40.314801 | orchestrator |
2026-02-28 01:01:40.314806 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-02-28 01:01:40.314812 | orchestrator | Saturday 28 February 2026 01:00:13 +0000 (0:00:02.225) 0:00:23.373 *****
2026-02-28 01:01:40.314818 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314824 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.314841 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.314847 | orchestrator |
2026-02-28 01:01:40.314853 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-02-28 01:01:40.314858 | orchestrator | Saturday 28 February 2026 01:00:13 +0000 (0:00:00.314) 0:00:23.688 *****
2026-02-28 01:01:40.314864 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.314870 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.314876 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.314882 | orchestrator |
2026-02-28 01:01:40.314888 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-28 01:01:40.314894 | orchestrator | Saturday 28 February 2026 01:00:14 +0000 (0:00:00.324) 0:00:24.012 *****
2026-02-28 01:01:40.314900 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:01:40.314906 | orchestrator |
2026-02-28 01:01:40.314912 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-02-28 01:01:40.314917 | orchestrator | Saturday 28 February 2026 01:00:15 +0000 (0:00:00.889) 0:00:24.902 *****
2026-02-28 01:01:40.314929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-28 01:01:40.314943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-28 01:01:40.314958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-28 01:01:40.314965 | orchestrator |
2026-02-28 01:01:40.314971 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-02-28 01:01:40.314977 | orchestrator | Saturday 28 February 2026 01:00:16 +0000 (0:00:01.653) 0:00:26.555 *****
2026-02-28 01:01:40.314988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-28 01:01:40.314999 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:01:40.315016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-28 01:01:40.315034 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:01:40.315044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-28 01:01:40.315055 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:01:40.315064 | orchestrator |
2026-02-28 01:01:40.315074 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-02-28 01:01:40.315088 | orchestrator | Saturday 
28 February 2026 01:00:17 +0000 (0:00:00.708) 0:00:27.264 ***** 2026-02-28 01:01:40.315105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:01:40.315123 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:40.315140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:01:40.315152 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:40.315171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-28 01:01:40.315184 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:40.315190 | orchestrator | 2026-02-28 01:01:40.315196 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-28 01:01:40.315203 | orchestrator | Saturday 28 February 2026 01:00:18 +0000 (0:00:00.947) 0:00:28.211 ***** 2026-02-28 01:01:40.315214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:40.315226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:40.315247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-28 01:01:40.315254 | orchestrator | 2026-02-28 01:01:40.315260 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:01:40.315272 | orchestrator | Saturday 28 February 2026 01:00:20 +0000 (0:00:01.899) 
0:00:30.111 ***** 2026-02-28 01:01:40.315279 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:01:40.315285 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:01:40.315291 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:01:40.315297 | orchestrator | 2026-02-28 01:01:40.315304 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-28 01:01:40.315310 | orchestrator | Saturday 28 February 2026 01:00:20 +0000 (0:00:00.436) 0:00:30.547 ***** 2026-02-28 01:01:40.315316 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:01:40.315322 | orchestrator | 2026-02-28 01:01:40.315329 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-28 01:01:40.315338 | orchestrator | Saturday 28 February 2026 01:00:21 +0000 (0:00:00.598) 0:00:31.145 ***** 2026-02-28 01:01:40.315345 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:40.315351 | orchestrator | 2026-02-28 01:01:40.315358 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-28 01:01:40.315364 | orchestrator | Saturday 28 February 2026 01:00:24 +0000 (0:00:02.875) 0:00:34.021 ***** 2026-02-28 01:01:40.315370 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:40.315377 | orchestrator | 2026-02-28 01:01:40.315383 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-28 01:01:40.315389 | orchestrator | Saturday 28 February 2026 01:00:27 +0000 (0:00:02.941) 0:00:36.963 ***** 2026-02-28 01:01:40.315396 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:40.315402 | orchestrator | 2026-02-28 01:01:40.315408 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-28 01:01:40.315414 | orchestrator | Saturday 28 February 2026 01:00:44 +0000 
(0:00:17.289) 0:00:54.253 ***** 2026-02-28 01:01:40.315421 | orchestrator | 2026-02-28 01:01:40.315427 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-28 01:01:40.315433 | orchestrator | Saturday 28 February 2026 01:00:44 +0000 (0:00:00.076) 0:00:54.329 ***** 2026-02-28 01:01:40.315440 | orchestrator | 2026-02-28 01:01:40.315446 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-28 01:01:40.315452 | orchestrator | Saturday 28 February 2026 01:00:44 +0000 (0:00:00.068) 0:00:54.398 ***** 2026-02-28 01:01:40.315459 | orchestrator | 2026-02-28 01:01:40.315465 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-28 01:01:40.315471 | orchestrator | Saturday 28 February 2026 01:00:44 +0000 (0:00:00.069) 0:00:54.468 ***** 2026-02-28 01:01:40.315477 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:01:40.315484 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:01:40.315490 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:01:40.315496 | orchestrator | 2026-02-28 01:01:40.315503 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:01:40.315509 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-28 01:01:40.315516 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-28 01:01:40.315522 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-28 01:01:40.315529 | orchestrator | 2026-02-28 01:01:40.315535 | orchestrator | 2026-02-28 01:01:40.315541 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:01:40.315548 | orchestrator | Saturday 28 February 2026 01:01:38 +0000 (0:00:53.957) 0:01:48.425 
***** 2026-02-28 01:01:40.315554 | orchestrator | =============================================================================== 2026-02-28 01:01:40.315560 | orchestrator | horizon : Restart horizon container ------------------------------------ 53.96s 2026-02-28 01:01:40.315573 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.29s 2026-02-28 01:01:40.315580 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.94s 2026-02-28 01:01:40.315586 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.88s 2026-02-28 01:01:40.315595 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.78s 2026-02-28 01:01:40.315602 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.23s 2026-02-28 01:01:40.315608 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.09s 2026-02-28 01:01:40.315615 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.90s 2026-02-28 01:01:40.315621 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.87s 2026-02-28 01:01:40.315660 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.65s 2026-02-28 01:01:40.315669 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.21s 2026-02-28 01:01:40.315675 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.95s 2026-02-28 01:01:40.315681 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.89s 2026-02-28 01:01:40.315688 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.84s 2026-02-28 01:01:40.315694 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 
2026-02-28 01:01:40.315700 | orchestrator | horizon : Update policy file name --------------------------------------- 0.68s 2026-02-28 01:01:40.315706 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.62s 2026-02-28 01:01:40.315713 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-02-28 01:01:40.315719 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2026-02-28 01:01:40.315725 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2026-02-28 01:01:40.317610 | orchestrator | 2026-02-28 01:01:40 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:40.317671 | orchestrator | 2026-02-28 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:43.351430 | orchestrator | 2026-02-28 01:01:43 | INFO  | Task be927962-c00d-4200-b711-6b5e299ec696 is in state STARTED 2026-02-28 01:01:43.353074 | orchestrator | 2026-02-28 01:01:43 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:43.353152 | orchestrator | 2026-02-28 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:46.398859 | orchestrator | 2026-02-28 01:01:46 | INFO  | Task be927962-c00d-4200-b711-6b5e299ec696 is in state STARTED 2026-02-28 01:01:46.401213 | orchestrator | 2026-02-28 01:01:46 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:46.401259 | orchestrator | 2026-02-28 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:49.451057 | orchestrator | 2026-02-28 01:01:49 | INFO  | Task be927962-c00d-4200-b711-6b5e299ec696 is in state SUCCESS 2026-02-28 01:01:49.451270 | orchestrator | 2026-02-28 01:01:49 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:49.451293 | orchestrator | 2026-02-28 01:01:49 | INFO  | Wait 1 second(s) 
until the next check 2026-02-28 01:01:52.510741 | orchestrator | 2026-02-28 01:01:52 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:52.513698 | orchestrator | 2026-02-28 01:01:52 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:01:52.513765 | orchestrator | 2026-02-28 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:55.560193 | orchestrator | 2026-02-28 01:01:55 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:55.561059 | orchestrator | 2026-02-28 01:01:55 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:01:55.561097 | orchestrator | 2026-02-28 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:01:58.602714 | orchestrator | 2026-02-28 01:01:58 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:01:58.605907 | orchestrator | 2026-02-28 01:01:58 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:01:58.606004 | orchestrator | 2026-02-28 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:01.656337 | orchestrator | 2026-02-28 01:02:01 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:01.658426 | orchestrator | 2026-02-28 01:02:01 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:01.658450 | orchestrator | 2026-02-28 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:04.711520 | orchestrator | 2026-02-28 01:02:04 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:04.712460 | orchestrator | 2026-02-28 01:02:04 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:04.712528 | orchestrator | 2026-02-28 01:02:04 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:07.751237 | orchestrator | 2026-02-28 
01:02:07 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:07.752284 | orchestrator | 2026-02-28 01:02:07 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:07.752337 | orchestrator | 2026-02-28 01:02:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:10.799731 | orchestrator | 2026-02-28 01:02:10 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:10.800492 | orchestrator | 2026-02-28 01:02:10 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:10.800918 | orchestrator | 2026-02-28 01:02:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:13.856332 | orchestrator | 2026-02-28 01:02:13 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:13.857450 | orchestrator | 2026-02-28 01:02:13 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:13.857481 | orchestrator | 2026-02-28 01:02:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:16.907199 | orchestrator | 2026-02-28 01:02:16 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:16.907322 | orchestrator | 2026-02-28 01:02:16 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:16.907347 | orchestrator | 2026-02-28 01:02:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:19.957399 | orchestrator | 2026-02-28 01:02:19 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:19.959230 | orchestrator | 2026-02-28 01:02:19 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:19.959274 | orchestrator | 2026-02-28 01:02:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:23.014540 | orchestrator | 2026-02-28 01:02:23 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state 
STARTED 2026-02-28 01:02:23.016844 | orchestrator | 2026-02-28 01:02:23 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:23.016872 | orchestrator | 2026-02-28 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:26.072034 | orchestrator | 2026-02-28 01:02:26 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:26.072518 | orchestrator | 2026-02-28 01:02:26 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:26.072550 | orchestrator | 2026-02-28 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:29.125743 | orchestrator | 2026-02-28 01:02:29 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:29.130358 | orchestrator | 2026-02-28 01:02:29 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:29.130430 | orchestrator | 2026-02-28 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:32.173802 | orchestrator | 2026-02-28 01:02:32 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:32.175229 | orchestrator | 2026-02-28 01:02:32 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:32.175310 | orchestrator | 2026-02-28 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:35.221836 | orchestrator | 2026-02-28 01:02:35 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:35.225472 | orchestrator | 2026-02-28 01:02:35 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:35.225559 | orchestrator | 2026-02-28 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:38.276842 | orchestrator | 2026-02-28 01:02:38 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state STARTED 2026-02-28 01:02:38.281231 | orchestrator | 2026-02-28 01:02:38 | INFO  
| Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:38.281301 | orchestrator | 2026-02-28 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:41.325204 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task 755b7618-8f84-4768-a524-2db61b05532e is in state SUCCESS 2026-02-28 01:02:41.325334 | orchestrator | 2026-02-28 01:02:41.325345 | orchestrator | 2026-02-28 01:02:41.325353 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-28 01:02:41.325361 | orchestrator | 2026-02-28 01:02:41.325369 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-28 01:02:41.325377 | orchestrator | Saturday 28 February 2026 01:01:10 +0000 (0:00:00.195) 0:00:00.195 ***** 2026-02-28 01:02:41.325399 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-28 01:02:41.325408 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.325416 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.325423 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:02:41.325431 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.325438 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-28 01:02:41.325486 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-28 01:02:41.325495 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:02:41.325502 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-28 01:02:41.325510 | orchestrator | 2026-02-28 01:02:41.325517 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-28 01:02:41.325544 | orchestrator | Saturday 28 February 2026 01:01:15 +0000 (0:00:04.956) 0:00:05.152 ***** 2026-02-28 01:02:41.325697 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-28 01:02:41.325707 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.325714 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.325819 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:02:41.325829 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.325836 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-28 01:02:41.325843 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-28 01:02:41.325851 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:02:41.325859 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-28 01:02:41.325888 | orchestrator | 2026-02-28 01:02:41.325897 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-28 01:02:41.325905 | orchestrator | Saturday 28 February 2026 01:01:19 +0000 (0:00:04.269) 0:00:09.422 ***** 2026-02-28 01:02:41.325915 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-28 01:02:41.325924 | orchestrator | 
2026-02-28 01:02:41.325932 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-28 01:02:41.325940 | orchestrator | Saturday 28 February 2026 01:01:20 +0000 (0:00:01.219) 0:00:10.641 ***** 2026-02-28 01:02:41.325949 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-28 01:02:41.325997 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.326006 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.326518 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:02:41.326863 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.326875 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-28 01:02:41.326882 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-28 01:02:41.326890 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:02:41.326897 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-28 01:02:41.326906 | orchestrator | 2026-02-28 01:02:41.326913 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-28 01:02:41.326921 | orchestrator | Saturday 28 February 2026 01:01:37 +0000 (0:00:16.342) 0:00:26.984 ***** 2026-02-28 01:02:41.326928 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-28 01:02:41.326936 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-02-28 01:02:41.326944 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-28 01:02:41.326995 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-28 01:02:41.327014 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-28 01:02:41.327026 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-28 01:02:41.327038 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-28 01:02:41.327075 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-28 01:02:41.327088 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-28 01:02:41.327101 | orchestrator | 2026-02-28 01:02:41.327113 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-28 01:02:41.327124 | orchestrator | Saturday 28 February 2026 01:01:40 +0000 (0:00:03.358) 0:00:30.343 ***** 2026-02-28 01:02:41.327136 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-28 01:02:41.327148 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.327159 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.327171 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:02:41.327183 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-28 01:02:41.327195 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-28 01:02:41.327207 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-02-28 01:02:41.327219 | orchestrator | 
changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-28 01:02:41.327231 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-28 01:02:41.327244 | orchestrator | 2026-02-28 01:02:41.327253 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:02:41.327260 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:02:41.327270 | orchestrator | 2026-02-28 01:02:41.327277 | orchestrator | 2026-02-28 01:02:41.327284 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:02:41.327291 | orchestrator | Saturday 28 February 2026 01:01:48 +0000 (0:00:07.510) 0:00:37.853 ***** 2026-02-28 01:02:41.327299 | orchestrator | =============================================================================== 2026-02-28 01:02:41.327306 | orchestrator | Write ceph keys to the share directory --------------------------------- 16.34s 2026-02-28 01:02:41.327313 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.51s 2026-02-28 01:02:41.327321 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.96s 2026-02-28 01:02:41.327328 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.27s 2026-02-28 01:02:41.327335 | orchestrator | Check if target directories exist --------------------------------------- 3.36s 2026-02-28 01:02:41.327342 | orchestrator | Create share directory -------------------------------------------------- 1.22s 2026-02-28 01:02:41.327349 | orchestrator | 2026-02-28 01:02:41.327357 | orchestrator | 2026-02-28 01:02:41.327364 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:02:41.327371 | orchestrator | 2026-02-28 01:02:41.327378 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2026-02-28 01:02:41.327386 | orchestrator | Saturday 28 February 2026 00:59:50 +0000 (0:00:00.289) 0:00:00.289 ***** 2026-02-28 01:02:41.327393 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.327401 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:02:41.327408 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:02:41.327415 | orchestrator | 2026-02-28 01:02:41.327423 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:02:41.327430 | orchestrator | Saturday 28 February 2026 00:59:50 +0000 (0:00:00.332) 0:00:00.622 ***** 2026-02-28 01:02:41.327437 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-28 01:02:41.327445 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-28 01:02:41.327453 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-28 01:02:41.327460 | orchestrator | 2026-02-28 01:02:41.327467 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-28 01:02:41.327482 | orchestrator | 2026-02-28 01:02:41.327490 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:02:41.327497 | orchestrator | Saturday 28 February 2026 00:59:51 +0000 (0:00:00.492) 0:00:01.115 ***** 2026-02-28 01:02:41.327504 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:02:41.327511 | orchestrator | 2026-02-28 01:02:41.327519 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-28 01:02:41.327526 | orchestrator | Saturday 28 February 2026 00:59:52 +0000 (0:00:00.630) 0:00:01.745 ***** 2026-02-28 01:02:41.327574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.327587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.327596 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.327605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327704 | orchestrator | 2026-02-28 01:02:41.327712 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-02-28 01:02:41.327719 | orchestrator | Saturday 28 February 2026 00:59:53 +0000 (0:00:01.853) 0:00:03.599 ***** 2026-02-28 01:02:41.327727 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.327734 | orchestrator | 2026-02-28 01:02:41.327741 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-28 01:02:41.327749 | orchestrator | Saturday 28 February 2026 00:59:54 +0000 (0:00:00.186) 0:00:03.785 ***** 2026-02-28 01:02:41.327761 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.327769 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.327776 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.327783 | orchestrator | 2026-02-28 01:02:41.327791 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-28 01:02:41.327798 | orchestrator | Saturday 28 February 2026 00:59:54 +0000 (0:00:00.504) 
0:00:04.290 ***** 2026-02-28 01:02:41.327805 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:02:41.327813 | orchestrator | 2026-02-28 01:02:41.327820 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:02:41.327827 | orchestrator | Saturday 28 February 2026 00:59:55 +0000 (0:00:00.984) 0:00:05.275 ***** 2026-02-28 01:02:41.327834 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:02:41.327842 | orchestrator | 2026-02-28 01:02:41.327849 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-28 01:02:41.327856 | orchestrator | Saturday 28 February 2026 00:59:56 +0000 (0:00:00.591) 0:00:05.867 ***** 2026-02-28 01:02:41.327870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.327883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.327892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 
01:02:41.327905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.327962 | orchestrator | 2026-02-28 01:02:41.327975 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 
2026-02-28 01:02:41.327982 | orchestrator | Saturday 28 February 2026 00:59:59 +0000 (0:00:03.636) 0:00:09.503 ***** 2026-02-28 01:02:41.327990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 01:02:41.327998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:02:41.328018 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.328030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 01:02:41.328038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:02:41.328063 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.328071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-02-28 01:02:41.328079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:02:41.328104 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.328111 | orchestrator | 2026-02-28 01:02:41.328119 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-28 01:02:41.328126 | orchestrator | Saturday 28 February 2026 01:00:00 +0000 (0:00:00.726) 0:00:10.230 ***** 2026-02-28 01:02:41.328134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 01:02:41.328148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2026-02-28 01:02:41.328164 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.328172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 01:02:41.328187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:02:41.328209 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.328217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 01:02:41.328225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:02:41.328240 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.328247 | orchestrator | 2026-02-28 01:02:41.328254 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-28 01:02:41.328262 | orchestrator | Saturday 28 February 2026 01:00:01 +0000 (0:00:00.960) 0:00:11.191 ***** 2026-02-28 01:02:41.328279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.328288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.328302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.328310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.328318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.328330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.328342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.328354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.328362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.328369 | orchestrator | 2026-02-28 01:02:41.328377 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-28 01:02:41.328384 | orchestrator | Saturday 28 February 2026 01:00:05 +0000 (0:00:03.823) 0:00:15.014 ***** 2026-02-28 01:02:41.328392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.328399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.328432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328440 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.328448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.328468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.328480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.328492 | orchestrator | 2026-02-28 01:02:41.328500 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-28 01:02:41.328507 | orchestrator | Saturday 28 February 2026 01:00:11 +0000 (0:00:06.126) 0:00:21.141 ***** 2026-02-28 01:02:41.328515 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.328522 | 
orchestrator | changed: [testbed-node-1] 2026-02-28 01:02:41.328529 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:02:41.328537 | orchestrator | 2026-02-28 01:02:41.328544 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-28 01:02:41.328551 | orchestrator | Saturday 28 February 2026 01:00:13 +0000 (0:00:01.681) 0:00:22.822 ***** 2026-02-28 01:02:41.328558 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.328566 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.328573 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.328580 | orchestrator | 2026-02-28 01:02:41.328681 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-28 01:02:41.328691 | orchestrator | Saturday 28 February 2026 01:00:13 +0000 (0:00:00.603) 0:00:23.426 ***** 2026-02-28 01:02:41.328699 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.328706 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.328713 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.328720 | orchestrator | 2026-02-28 01:02:41.328728 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-28 01:02:41.328735 | orchestrator | Saturday 28 February 2026 01:00:14 +0000 (0:00:00.346) 0:00:23.772 ***** 2026-02-28 01:02:41.328742 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.328749 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.328757 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.328764 | orchestrator | 2026-02-28 01:02:41.328771 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-28 01:02:41.328779 | orchestrator | Saturday 28 February 2026 01:00:14 +0000 (0:00:00.558) 0:00:24.330 ***** 2026-02-28 01:02:41.328787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 01:02:41.328795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:02:41.328824 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.328836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 01:02:41.328844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-28 01:02:41.328852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:02:41.328860 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.328868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-28 01:02:41.328887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-28 01:02:41.328899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-28 01:02:41.328907 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.328914 | orchestrator | 2026-02-28 01:02:41.328921 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:02:41.328929 | orchestrator | Saturday 28 February 2026 01:00:15 +0000 (0:00:00.680) 0:00:25.011 ***** 2026-02-28 01:02:41.328936 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.328943 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.328951 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.328958 | orchestrator | 2026-02-28 01:02:41.328965 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-28 01:02:41.328972 | orchestrator | Saturday 28 February 2026 01:00:15 +0000 (0:00:00.349) 0:00:25.361 ***** 2026-02-28 01:02:41.328980 | orchestrator | changed: 
[testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-28 01:02:41.328987 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-28 01:02:41.328994 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-28 01:02:41.329002 | orchestrator | 2026-02-28 01:02:41.329009 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-28 01:02:41.329016 | orchestrator | Saturday 28 February 2026 01:00:17 +0000 (0:00:01.596) 0:00:26.958 ***** 2026-02-28 01:02:41.329023 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:02:41.329031 | orchestrator | 2026-02-28 01:02:41.329038 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-28 01:02:41.329045 | orchestrator | Saturday 28 February 2026 01:00:18 +0000 (0:00:01.106) 0:00:28.064 ***** 2026-02-28 01:02:41.329052 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.329060 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.329067 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.329074 | orchestrator | 2026-02-28 01:02:41.329081 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-28 01:02:41.329089 | orchestrator | Saturday 28 February 2026 01:00:19 +0000 (0:00:01.012) 0:00:29.077 ***** 2026-02-28 01:02:41.329096 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-28 01:02:41.329103 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:02:41.329110 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-28 01:02:41.329118 | orchestrator | 2026-02-28 01:02:41.329125 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-28 01:02:41.329137 | orchestrator | Saturday 28 February 2026 01:00:20 
+0000 (0:00:01.423) 0:00:30.500 ***** 2026-02-28 01:02:41.329145 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.329152 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:02:41.329159 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:02:41.329167 | orchestrator | 2026-02-28 01:02:41.329174 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-28 01:02:41.329181 | orchestrator | Saturday 28 February 2026 01:00:21 +0000 (0:00:00.356) 0:00:30.857 ***** 2026-02-28 01:02:41.329188 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-28 01:02:41.329195 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-28 01:02:41.329203 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-28 01:02:41.329210 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-28 01:02:41.329217 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-28 01:02:41.329224 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-28 01:02:41.329232 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-28 01:02:41.329239 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-28 01:02:41.329246 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-28 01:02:41.329254 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-28 01:02:41.329261 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 
'fernet-push.sh'}) 2026-02-28 01:02:41.329268 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-28 01:02:41.329275 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-28 01:02:41.329286 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-28 01:02:41.329294 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-28 01:02:41.329301 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:02:41.329309 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:02:41.329321 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:02:41.329331 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:02:41.329339 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:02:41.329347 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:02:41.329355 | orchestrator | 2026-02-28 01:02:41.329364 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-28 01:02:41.329373 | orchestrator | Saturday 28 February 2026 01:00:30 +0000 (0:00:09.541) 0:00:40.398 ***** 2026-02-28 01:02:41.329381 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:02:41.329389 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:02:41.329398 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 
'sshd_config'}) 2026-02-28 01:02:41.329406 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:02:41.329414 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:02:41.329427 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:02:41.329436 | orchestrator | 2026-02-28 01:02:41.329444 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-28 01:02:41.329452 | orchestrator | Saturday 28 February 2026 01:00:33 +0000 (0:00:03.210) 0:00:43.609 ***** 2026-02-28 01:02:41.329461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.329471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.329489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-28 01:02:41.329499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.329512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.329521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-28 01:02:41.329529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.329537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.329548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-28 01:02:41.329556 | orchestrator | 2026-02-28 01:02:41.329563 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:02:41.329571 | orchestrator | Saturday 28 February 2026 01:00:36 +0000 
(0:00:02.440) 0:00:46.050 ***** 2026-02-28 01:02:41.329578 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.329586 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.329593 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.329600 | orchestrator | 2026-02-28 01:02:41.329611 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-28 01:02:41.329619 | orchestrator | Saturday 28 February 2026 01:00:36 +0000 (0:00:00.309) 0:00:46.359 ***** 2026-02-28 01:02:41.329626 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.329661 | orchestrator | 2026-02-28 01:02:41.329674 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-28 01:02:41.329682 | orchestrator | Saturday 28 February 2026 01:00:39 +0000 (0:00:02.350) 0:00:48.709 ***** 2026-02-28 01:02:41.329689 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.329696 | orchestrator | 2026-02-28 01:02:41.329704 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-28 01:02:41.329711 | orchestrator | Saturday 28 February 2026 01:00:41 +0000 (0:00:02.335) 0:00:51.045 ***** 2026-02-28 01:02:41.329718 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.329726 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:02:41.329733 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:02:41.329741 | orchestrator | 2026-02-28 01:02:41.329748 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-28 01:02:41.329755 | orchestrator | Saturday 28 February 2026 01:00:42 +0000 (0:00:01.191) 0:00:52.236 ***** 2026-02-28 01:02:41.329763 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.329770 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:02:41.329778 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:02:41.329785 | orchestrator | 2026-02-28 01:02:41.329792 | 
orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-28 01:02:41.329800 | orchestrator | Saturday 28 February 2026 01:00:42 +0000 (0:00:00.388) 0:00:52.625 ***** 2026-02-28 01:02:41.329807 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.329814 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.329822 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.329829 | orchestrator | 2026-02-28 01:02:41.329836 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-28 01:02:41.329844 | orchestrator | Saturday 28 February 2026 01:00:43 +0000 (0:00:00.339) 0:00:52.965 ***** 2026-02-28 01:02:41.329851 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.329858 | orchestrator | 2026-02-28 01:02:41.329866 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-28 01:02:41.329873 | orchestrator | Saturday 28 February 2026 01:00:59 +0000 (0:00:15.771) 0:01:08.737 ***** 2026-02-28 01:02:41.329881 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.329888 | orchestrator | 2026-02-28 01:02:41.329895 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-28 01:02:41.329903 | orchestrator | Saturday 28 February 2026 01:01:10 +0000 (0:00:11.453) 0:01:20.190 ***** 2026-02-28 01:02:41.329910 | orchestrator | 2026-02-28 01:02:41.329917 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-28 01:02:41.329925 | orchestrator | Saturday 28 February 2026 01:01:10 +0000 (0:00:00.074) 0:01:20.264 ***** 2026-02-28 01:02:41.329932 | orchestrator | 2026-02-28 01:02:41.329939 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-28 01:02:41.329947 | orchestrator | Saturday 28 February 2026 01:01:10 +0000 (0:00:00.063) 0:01:20.328 
***** 2026-02-28 01:02:41.329954 | orchestrator | 2026-02-28 01:02:41.329961 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-28 01:02:41.329969 | orchestrator | Saturday 28 February 2026 01:01:10 +0000 (0:00:00.079) 0:01:20.407 ***** 2026-02-28 01:02:41.329976 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.329983 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:02:41.329991 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:02:41.329998 | orchestrator | 2026-02-28 01:02:41.330005 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-28 01:02:41.330037 | orchestrator | Saturday 28 February 2026 01:01:30 +0000 (0:00:19.503) 0:01:39.911 ***** 2026-02-28 01:02:41.330047 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:02:41.330054 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.330062 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:02:41.330069 | orchestrator | 2026-02-28 01:02:41.330076 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-28 01:02:41.330084 | orchestrator | Saturday 28 February 2026 01:01:40 +0000 (0:00:09.976) 0:01:49.887 ***** 2026-02-28 01:02:41.330097 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.330104 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:02:41.330111 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:02:41.330119 | orchestrator | 2026-02-28 01:02:41.330126 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-28 01:02:41.330133 | orchestrator | Saturday 28 February 2026 01:01:47 +0000 (0:00:06.843) 0:01:56.731 ***** 2026-02-28 01:02:41.330141 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:02:41.330148 | orchestrator | 2026-02-28 
01:02:41.330155 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-28 01:02:41.330163 | orchestrator | Saturday 28 February 2026 01:01:47 +0000 (0:00:00.846) 0:01:57.577 ***** 2026-02-28 01:02:41.330170 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.330177 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:02:41.330185 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:02:41.330192 | orchestrator | 2026-02-28 01:02:41.330199 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-02-28 01:02:41.330206 | orchestrator | Saturday 28 February 2026 01:01:48 +0000 (0:00:00.879) 0:01:58.457 ***** 2026-02-28 01:02:41.330214 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:02:41.330221 | orchestrator | 2026-02-28 01:02:41.330229 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-28 01:02:41.330240 | orchestrator | Saturday 28 February 2026 01:01:50 +0000 (0:00:02.016) 0:02:00.473 ***** 2026-02-28 01:02:41.330248 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-28 01:02:41.330256 | orchestrator | 2026-02-28 01:02:41.330263 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-28 01:02:41.330270 | orchestrator | Saturday 28 February 2026 01:02:03 +0000 (0:00:12.776) 0:02:13.249 ***** 2026-02-28 01:02:41.330277 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-28 01:02:41.330285 | orchestrator | 2026-02-28 01:02:41.330296 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-28 01:02:41.330304 | orchestrator | Saturday 28 February 2026 01:02:28 +0000 (0:00:24.782) 0:02:38.032 ***** 2026-02-28 01:02:41.330311 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-28 01:02:41.330319 | 
orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-28 01:02:41.330326 | orchestrator | 2026-02-28 01:02:41.330333 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-28 01:02:41.330340 | orchestrator | Saturday 28 February 2026 01:02:35 +0000 (0:00:06.995) 0:02:45.027 ***** 2026-02-28 01:02:41.330347 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.330355 | orchestrator | 2026-02-28 01:02:41.330362 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-28 01:02:41.330369 | orchestrator | Saturday 28 February 2026 01:02:35 +0000 (0:00:00.159) 0:02:45.187 ***** 2026-02-28 01:02:41.330377 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.330384 | orchestrator | 2026-02-28 01:02:41.330391 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-28 01:02:41.330398 | orchestrator | Saturday 28 February 2026 01:02:35 +0000 (0:00:00.117) 0:02:45.304 ***** 2026-02-28 01:02:41.330406 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.330413 | orchestrator | 2026-02-28 01:02:41.330420 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-02-28 01:02:41.330427 | orchestrator | Saturday 28 February 2026 01:02:35 +0000 (0:00:00.127) 0:02:45.431 ***** 2026-02-28 01:02:41.330435 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.330442 | orchestrator | 2026-02-28 01:02:41.330449 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-28 01:02:41.330456 | orchestrator | Saturday 28 February 2026 01:02:36 +0000 (0:00:00.545) 0:02:45.976 ***** 2026-02-28 01:02:41.330469 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:02:41.330476 | orchestrator | 2026-02-28 01:02:41.330483 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2026-02-28 01:02:41.330491 | orchestrator | Saturday 28 February 2026 01:02:39 +0000 (0:00:03.243) 0:02:49.220 ***** 2026-02-28 01:02:41.330498 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:02:41.330505 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:02:41.330513 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:02:41.330520 | orchestrator | 2026-02-28 01:02:41.330527 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:02:41.330535 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-28 01:02:41.330543 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-28 01:02:41.330551 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-28 01:02:41.330558 | orchestrator | 2026-02-28 01:02:41.330565 | orchestrator | 2026-02-28 01:02:41.330573 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:02:41.330580 | orchestrator | Saturday 28 February 2026 01:02:39 +0000 (0:00:00.426) 0:02:49.647 ***** 2026-02-28 01:02:41.330587 | orchestrator | =============================================================================== 2026-02-28 01:02:41.330594 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.78s 2026-02-28 01:02:41.330601 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.50s 2026-02-28 01:02:41.330609 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.77s 2026-02-28 01:02:41.330616 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.78s 2026-02-28 01:02:41.330623 | orchestrator | keystone : Running Keystone fernet 
bootstrap container ----------------- 11.45s 2026-02-28 01:02:41.330650 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.98s 2026-02-28 01:02:41.330658 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.54s 2026-02-28 01:02:41.330665 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.00s 2026-02-28 01:02:41.330672 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.84s 2026-02-28 01:02:41.330680 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.13s 2026-02-28 01:02:41.330687 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.82s 2026-02-28 01:02:41.330694 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.64s 2026-02-28 01:02:41.330701 | orchestrator | keystone : Creating default user role ----------------------------------- 3.24s 2026-02-28 01:02:41.330708 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.21s 2026-02-28 01:02:41.330716 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.44s 2026-02-28 01:02:41.330723 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.35s 2026-02-28 01:02:41.330733 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.34s 2026-02-28 01:02:41.330741 | orchestrator | keystone : Run key distribution ----------------------------------------- 2.02s 2026-02-28 01:02:41.330748 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.85s 2026-02-28 01:02:41.330756 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.68s 2026-02-28 01:02:41.330767 | orchestrator | 2026-02-28 01:02:41 | INFO  | Task 
5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:41.330775 | orchestrator | 2026-02-28 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:44.389267 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:02:44.389383 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:02:44.389395 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:02:44.389403 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:44.389410 | orchestrator | 2026-02-28 01:02:44 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:02:44.389419 | orchestrator | 2026-02-28 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:47.403984 | orchestrator | 2026-02-28 01:02:47 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:02:47.404432 | orchestrator | 2026-02-28 01:02:47 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:02:47.408792 | orchestrator | 2026-02-28 01:02:47 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:02:47.408873 | orchestrator | 2026-02-28 01:02:47 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:47.410108 | orchestrator | 2026-02-28 01:02:47 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:02:47.410142 | orchestrator | 2026-02-28 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:50.462315 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:02:50.462477 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task 
a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:02:50.465434 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:02:50.468990 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:50.470799 | orchestrator | 2026-02-28 01:02:50 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:02:50.470832 | orchestrator | 2026-02-28 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:53.523524 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:02:53.526202 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:02:53.530421 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:02:53.532851 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state STARTED 2026-02-28 01:02:53.535560 | orchestrator | 2026-02-28 01:02:53 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:02:53.535618 | orchestrator | 2026-02-28 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:56.583299 | orchestrator | 2026-02-28 01:02:56 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:02:56.585388 | orchestrator | 2026-02-28 01:02:56 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:02:56.589395 | orchestrator | 2026-02-28 01:02:56 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:02:56.595625 | orchestrator | 2026-02-28 01:02:56 | INFO  | Task 5e1d16f7-74ec-42fe-b07d-bb8bad940554 is in state SUCCESS 2026-02-28 01:02:56.598456 | orchestrator | 2026-02-28 01:02:56 | INFO  | Task 
34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:02:56.598530 | orchestrator | 2026-02-28 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:02:59.649039 | orchestrator | 2026-02-28 01:02:59 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:02:59.650560 | orchestrator | 2026-02-28 01:02:59 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:02:59.652566 | orchestrator | 2026-02-28 01:02:59 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:02:59.653996 | orchestrator | 2026-02-28 01:02:59 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:02:59.655558 | orchestrator | 2026-02-28 01:02:59 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:02:59.655611 | orchestrator | 2026-02-28 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:02.701599 | orchestrator | 2026-02-28 01:03:02 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:02.703272 | orchestrator | 2026-02-28 01:03:02 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:03:02.705182 | orchestrator | 2026-02-28 01:03:02 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:02.707241 | orchestrator | 2026-02-28 01:03:02 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:02.708687 | orchestrator | 2026-02-28 01:03:02 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:02.708844 | orchestrator | 2026-02-28 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:05.755303 | orchestrator | 2026-02-28 01:03:05 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:05.757980 | orchestrator | 2026-02-28 01:03:05 | INFO  | Task 
a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:03:05.760764 | orchestrator | 2026-02-28 01:03:05 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:05.762905 | orchestrator | 2026-02-28 01:03:05 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:05.765422 | orchestrator | 2026-02-28 01:03:05 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:05.765461 | orchestrator | 2026-02-28 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:08.823035 | orchestrator | 2026-02-28 01:03:08 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:08.823527 | orchestrator | 2026-02-28 01:03:08 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:03:08.825109 | orchestrator | 2026-02-28 01:03:08 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:08.826868 | orchestrator | 2026-02-28 01:03:08 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:08.828953 | orchestrator | 2026-02-28 01:03:08 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:08.829003 | orchestrator | 2026-02-28 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:11.882749 | orchestrator | 2026-02-28 01:03:11 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:11.883464 | orchestrator | 2026-02-28 01:03:11 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:03:11.884820 | orchestrator | 2026-02-28 01:03:11 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:11.887179 | orchestrator | 2026-02-28 01:03:11 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:11.888537 | orchestrator | 2026-02-28 01:03:11 | INFO  | Task 
34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:11.888600 | orchestrator | 2026-02-28 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:14.937124 | orchestrator | 2026-02-28 01:03:14 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:14.938391 | orchestrator | 2026-02-28 01:03:14 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:03:14.940401 | orchestrator | 2026-02-28 01:03:14 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:14.941714 | orchestrator | 2026-02-28 01:03:14 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:14.942304 | orchestrator | 2026-02-28 01:03:14 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:14.942343 | orchestrator | 2026-02-28 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:17.998541 | orchestrator | 2026-02-28 01:03:17 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:18.000807 | orchestrator | 2026-02-28 01:03:17 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:03:18.003117 | orchestrator | 2026-02-28 01:03:18 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:18.005880 | orchestrator | 2026-02-28 01:03:18 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:18.005920 | orchestrator | 2026-02-28 01:03:18 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:18.005928 | orchestrator | 2026-02-28 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:21.062690 | orchestrator | 2026-02-28 01:03:21 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:21.063127 | orchestrator | 2026-02-28 01:03:21 | INFO  | Task 
a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:03:21.065105 | orchestrator | 2026-02-28 01:03:21 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:21.066275 | orchestrator | 2026-02-28 01:03:21 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:21.067623 | orchestrator | 2026-02-28 01:03:21 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:21.067702 | orchestrator | 2026-02-28 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:24.119586 | orchestrator | 2026-02-28 01:03:24 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:24.121362 | orchestrator | 2026-02-28 01:03:24 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:03:24.122689 | orchestrator | 2026-02-28 01:03:24 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:24.124270 | orchestrator | 2026-02-28 01:03:24 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:24.125891 | orchestrator | 2026-02-28 01:03:24 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:24.125925 | orchestrator | 2026-02-28 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:27.153227 | orchestrator | 2026-02-28 01:03:27 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:27.153340 | orchestrator | 2026-02-28 01:03:27 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED 2026-02-28 01:03:27.153711 | orchestrator | 2026-02-28 01:03:27 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:27.154593 | orchestrator | 2026-02-28 01:03:27 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:27.155136 | orchestrator | 2026-02-28 01:03:27 | INFO  | Task 
34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:03:27.155163 | orchestrator | 2026-02-28 01:03:27 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:03:30.187005 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:03:30.187779 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state STARTED
2026-02-28 01:03:30.189995 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:03:30.190592 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED
2026-02-28 01:03:30.191498 | orchestrator | 2026-02-28 01:03:30 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:03:30.191552 | orchestrator | 2026-02-28 01:03:30 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:03:33.218494 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:03:33.218585 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task a8ac8af2-f365-45c1-909b-e8529d44a399 is in state SUCCESS
2026-02-28 01:03:33.218981 | orchestrator |
2026-02-28 01:03:33.219005 | orchestrator |
2026-02-28 01:03:33.219013 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-28 01:03:33.219022 | orchestrator |
2026-02-28 01:03:33.219029 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-28 01:03:33.219038 | orchestrator | Saturday 28 February 2026 01:01:53 +0000 (0:00:00.279) 0:00:00.279 *****
2026-02-28 01:03:33.219046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-28 01:03:33.219054 | orchestrator |
2026-02-28 01:03:33.219061 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-28 01:03:33.219068 | orchestrator | Saturday 28 February 2026 01:01:54 +0000 (0:00:00.395) 0:00:00.675 *****
2026-02-28 01:03:33.219075 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-28 01:03:33.219083 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-28 01:03:33.219090 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-28 01:03:33.219097 | orchestrator |
2026-02-28 01:03:33.219121 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-28 01:03:33.219129 | orchestrator | Saturday 28 February 2026 01:01:55 +0000 (0:00:01.530) 0:00:02.205 *****
2026-02-28 01:03:33.219136 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-28 01:03:33.219143 | orchestrator |
2026-02-28 01:03:33.219150 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-28 01:03:33.219157 | orchestrator | Saturday 28 February 2026 01:01:57 +0000 (0:00:01.850) 0:00:04.055 *****
2026-02-28 01:03:33.219165 | orchestrator | changed: [testbed-manager]
2026-02-28 01:03:33.219171 | orchestrator |
2026-02-28 01:03:33.219178 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-28 01:03:33.219185 | orchestrator | Saturday 28 February 2026 01:01:58 +0000 (0:00:00.992) 0:00:05.047 *****
2026-02-28 01:03:33.219192 | orchestrator | changed: [testbed-manager]
2026-02-28 01:03:33.219199 | orchestrator |
2026-02-28 01:03:33.219227 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-28 01:03:33.219234 | orchestrator | Saturday 28 February 2026 01:01:59 +0000 (0:00:01.050) 0:00:06.098 *****
2026-02-28 01:03:33.219242 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-28 01:03:33.219249 | orchestrator | ok: [testbed-manager]
2026-02-28 01:03:33.219256 | orchestrator |
2026-02-28 01:03:33.219264 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-28 01:03:33.219271 | orchestrator | Saturday 28 February 2026 01:02:45 +0000 (0:00:45.511) 0:00:51.610 *****
2026-02-28 01:03:33.219279 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-28 01:03:33.219287 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-28 01:03:33.219294 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-28 01:03:33.219301 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-28 01:03:33.219308 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-28 01:03:33.219315 | orchestrator |
2026-02-28 01:03:33.219321 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-28 01:03:33.219328 | orchestrator | Saturday 28 February 2026 01:02:49 +0000 (0:00:04.408) 0:00:56.019 *****
2026-02-28 01:03:33.219335 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-28 01:03:33.219341 | orchestrator |
2026-02-28 01:03:33.219347 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-28 01:03:33.219353 | orchestrator | Saturday 28 February 2026 01:02:50 +0000 (0:00:00.555) 0:00:56.574 *****
2026-02-28 01:03:33.219359 | orchestrator | skipping: [testbed-manager]
2026-02-28 01:03:33.219364 | orchestrator |
2026-02-28 01:03:33.219370 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-28 01:03:33.219376 | orchestrator | Saturday 28 February 2026 01:02:50 +0000 (0:00:00.204) 0:00:56.779 *****
2026-02-28 01:03:33.219382 | orchestrator | skipping: [testbed-manager]
2026-02-28 01:03:33.219389 | orchestrator |
2026-02-28 01:03:33.219396 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-28 01:03:33.219403 | orchestrator | Saturday 28 February 2026 01:02:51 +0000 (0:00:00.587) 0:00:57.366 *****
2026-02-28 01:03:33.219409 | orchestrator | changed: [testbed-manager]
2026-02-28 01:03:33.219415 | orchestrator |
2026-02-28 01:03:33.219421 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-28 01:03:33.219428 | orchestrator | Saturday 28 February 2026 01:02:52 +0000 (0:00:01.594) 0:00:58.960 *****
2026-02-28 01:03:33.219434 | orchestrator | changed: [testbed-manager]
2026-02-28 01:03:33.219439 | orchestrator |
2026-02-28 01:03:33.219445 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-28 01:03:33.219450 | orchestrator | Saturday 28 February 2026 01:02:53 +0000 (0:00:00.925) 0:00:59.886 *****
2026-02-28 01:03:33.219456 | orchestrator | changed: [testbed-manager]
2026-02-28 01:03:33.219461 | orchestrator |
2026-02-28 01:03:33.219466 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-28 01:03:33.219473 | orchestrator | Saturday 28 February 2026 01:02:54 +0000 (0:00:00.627) 0:01:00.514 *****
2026-02-28 01:03:33.219481 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-28 01:03:33.219487 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-28 01:03:33.219493 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-28 01:03:33.219499 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-28 01:03:33.219506 | orchestrator |
2026-02-28 01:03:33.219513 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:03:33.219519 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-28 01:03:33.219528 | orchestrator |
2026-02-28 01:03:33.219533 | orchestrator |
2026-02-28 01:03:33.219550 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:03:33.219566 | orchestrator | Saturday 28 February 2026 01:02:55 +0000 (0:00:01.703) 0:01:02.217 *****
2026-02-28 01:03:33.219573 | orchestrator | ===============================================================================
2026-02-28 01:03:33.219579 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 45.51s
2026-02-28 01:03:33.219586 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.41s
2026-02-28 01:03:33.219593 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.85s
2026-02-28 01:03:33.219599 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.70s
2026-02-28 01:03:33.219607 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.60s
2026-02-28 01:03:33.219615 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.53s
2026-02-28 01:03:33.219622 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.05s
2026-02-28 01:03:33.219668 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.99s
2026-02-28 01:03:33.219678 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.93s
2026-02-28 01:03:33.219685 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.63s
2026-02-28 01:03:33.219693 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.59s
2026-02-28 01:03:33.219701 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.56s
2026-02-28 01:03:33.219709 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.40s
2026-02-28 01:03:33.219717 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.20s
2026-02-28 01:03:33.219725 | orchestrator |
2026-02-28 01:03:33.219733 | orchestrator |
2026-02-28 01:03:33.219740 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-02-28 01:03:33.219748 | orchestrator |
2026-02-28 01:03:33.219755 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-02-28 01:03:33.219763 | orchestrator | Saturday 28 February 2026 01:02:45 +0000 (0:00:00.146) 0:00:00.146 *****
2026-02-28 01:03:33.219770 | orchestrator | changed: [localhost]
2026-02-28 01:03:33.219778 | orchestrator |
2026-02-28 01:03:33.219785 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-02-28 01:03:33.219794 | orchestrator | Saturday 28 February 2026 01:02:46 +0000 (0:00:01.268) 0:00:01.415 *****
2026-02-28 01:03:33.219802 | orchestrator | changed: [localhost]
2026-02-28 01:03:33.219809 | orchestrator |
2026-02-28 01:03:33.219816 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-02-28 01:03:33.219824 | orchestrator | Saturday 28 February 2026 01:03:23 +0000 (0:00:37.113) 0:00:38.528 *****
2026-02-28 01:03:33.219831 | orchestrator | changed: [localhost]
2026-02-28 01:03:33.219839 | orchestrator |
2026-02-28 01:03:33.219847 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:03:33.219855 | orchestrator |
2026-02-28 01:03:33.219862 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:03:33.219870 | orchestrator | Saturday 28 February 2026 01:03:29 +0000 (0:00:05.702) 0:00:44.233 *****
2026-02-28 01:03:33.219878 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:03:33.219886 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:03:33.219894 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:03:33.219902 | orchestrator |
2026-02-28 01:03:33.219909 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:03:33.219917 | orchestrator | Saturday 28 February 2026 01:03:29 +0000 (0:00:00.499) 0:00:44.733 *****
2026-02-28 01:03:33.219924 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-02-28 01:03:33.219930 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-02-28 01:03:33.219937 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-02-28 01:03:33.219943 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-02-28 01:03:33.219952 | orchestrator |
2026-02-28 01:03:33.219971 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-02-28 01:03:33.219978 | orchestrator | skipping: no hosts matched
2026-02-28 01:03:33.219986 | orchestrator |
2026-02-28 01:03:33.219994 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:03:33.220001 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:03:33.220010 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:03:33.220019 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:03:33.220026 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:03:33.220033 | orchestrator |
2026-02-28 01:03:33.220040 | orchestrator |
2026-02-28 01:03:33.220047 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:03:33.220054 | orchestrator | Saturday 28 February 2026 01:03:30 +0000 (0:00:00.752) 0:00:45.485 *****
2026-02-28 01:03:33.220061 | orchestrator | ===============================================================================
2026-02-28 01:03:33.220068 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 37.11s
2026-02-28 01:03:33.220075 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.70s
2026-02-28 01:03:33.220083 | orchestrator | Ensure the destination directory exists --------------------------------- 1.27s
2026-02-28 01:03:33.220090 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s
2026-02-28 01:03:33.220106 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s
2026-02-28 01:03:33.221834 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:03:33.222650 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:03:33.223546 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED
2026-02-28 01:03:33.224361 | orchestrator | 2026-02-28 01:03:33 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:03:33.224438 | orchestrator | 2026-02-28 01:03:33 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:03:36.254081 | orchestrator | 2026-02-28 01:03:36 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:03:36.255378 | orchestrator | 2026-02-28 01:03:36 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:03:36.256437 | orchestrator | 2026-02-28 01:03:36 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:03:36.257400 | orchestrator | 2026-02-28 01:03:36 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED
2026-02-28 
01:03:36.258548 | orchestrator | 2026-02-28 01:03:36 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:36.258581 | orchestrator | 2026-02-28 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:39.297067 | orchestrator | 2026-02-28 01:03:39 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:39.297310 | orchestrator | 2026-02-28 01:03:39 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:39.297348 | orchestrator | 2026-02-28 01:03:39 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:03:39.298124 | orchestrator | 2026-02-28 01:03:39 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:39.298729 | orchestrator | 2026-02-28 01:03:39 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:39.298757 | orchestrator | 2026-02-28 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:42.330404 | orchestrator | 2026-02-28 01:03:42 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:42.331145 | orchestrator | 2026-02-28 01:03:42 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:42.331780 | orchestrator | 2026-02-28 01:03:42 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:03:42.332344 | orchestrator | 2026-02-28 01:03:42 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:42.333100 | orchestrator | 2026-02-28 01:03:42 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:42.333132 | orchestrator | 2026-02-28 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:45.363364 | orchestrator | 2026-02-28 01:03:45 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:45.364277 | orchestrator 
| 2026-02-28 01:03:45 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:45.365504 | orchestrator | 2026-02-28 01:03:45 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:03:45.366595 | orchestrator | 2026-02-28 01:03:45 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:45.367837 | orchestrator | 2026-02-28 01:03:45 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:45.367926 | orchestrator | 2026-02-28 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:48.423121 | orchestrator | 2026-02-28 01:03:48 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:48.424354 | orchestrator | 2026-02-28 01:03:48 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:48.425687 | orchestrator | 2026-02-28 01:03:48 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:03:48.427074 | orchestrator | 2026-02-28 01:03:48 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:48.428463 | orchestrator | 2026-02-28 01:03:48 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:48.428491 | orchestrator | 2026-02-28 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:51.569374 | orchestrator | 2026-02-28 01:03:51 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:51.570477 | orchestrator | 2026-02-28 01:03:51 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:51.571458 | orchestrator | 2026-02-28 01:03:51 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:03:51.572458 | orchestrator | 2026-02-28 01:03:51 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:51.573671 | orchestrator | 
2026-02-28 01:03:51 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:51.573702 | orchestrator | 2026-02-28 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:54.617524 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:54.618391 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:54.619559 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:03:54.620719 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:54.621944 | orchestrator | 2026-02-28 01:03:54 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:54.621986 | orchestrator | 2026-02-28 01:03:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:03:57.652759 | orchestrator | 2026-02-28 01:03:57 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:03:57.653943 | orchestrator | 2026-02-28 01:03:57 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:03:57.654994 | orchestrator | 2026-02-28 01:03:57 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:03:57.656911 | orchestrator | 2026-02-28 01:03:57 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:03:57.658132 | orchestrator | 2026-02-28 01:03:57 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:03:57.658183 | orchestrator | 2026-02-28 01:03:57 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:04:00.697138 | orchestrator | 2026-02-28 01:04:00 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:04:00.698102 | orchestrator | 2026-02-28 01:04:00 | INFO  | 
Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:04:00.699058 | orchestrator | 2026-02-28 01:04:00 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:04:00.700373 | orchestrator | 2026-02-28 01:04:00 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:04:00.701815 | orchestrator | 2026-02-28 01:04:00 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:04:00.701990 | orchestrator | 2026-02-28 01:04:00 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:04:03.732221 | orchestrator | 2026-02-28 01:04:03 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:04:03.732302 | orchestrator | 2026-02-28 01:04:03 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:04:03.733199 | orchestrator | 2026-02-28 01:04:03 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:04:03.734171 | orchestrator | 2026-02-28 01:04:03 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:04:03.735081 | orchestrator | 2026-02-28 01:04:03 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:04:03.736316 | orchestrator | 2026-02-28 01:04:03 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:04:06.774273 | orchestrator | 2026-02-28 01:04:06 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:04:06.774841 | orchestrator | 2026-02-28 01:04:06 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:04:06.776258 | orchestrator | 2026-02-28 01:04:06 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:04:06.778525 | orchestrator | 2026-02-28 01:04:06 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:04:06.779559 | orchestrator | 2026-02-28 01:04:06 | INFO  | Task 
34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:04:06.779612 | orchestrator | 2026-02-28 01:04:06 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:04:09.835402 | orchestrator | 2026-02-28 01:04:09 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:04:09.836143 | orchestrator | 2026-02-28 01:04:09 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:04:09.837185 | orchestrator | 2026-02-28 01:04:09 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:04:09.838256 | orchestrator | 2026-02-28 01:04:09 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:04:09.839222 | orchestrator | 2026-02-28 01:04:09 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:04:09.839258 | orchestrator | 2026-02-28 01:04:09 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:04:12.881362 | orchestrator | 2026-02-28 01:04:12 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:04:12.882485 | orchestrator | 2026-02-28 01:04:12 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:04:12.883985 | orchestrator | 2026-02-28 01:04:12 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:04:12.885725 | orchestrator | 2026-02-28 01:04:12 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:04:12.887703 | orchestrator | 2026-02-28 01:04:12 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:04:12.888097 | orchestrator | 2026-02-28 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:04:15.918100 | orchestrator | 2026-02-28 01:04:15 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:04:15.918491 | orchestrator | 2026-02-28 01:04:15 | INFO  | Task 
9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:04:15.920233 | orchestrator | 2026-02-28 01:04:15 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:04:15.921413 | orchestrator | 2026-02-28 01:04:15 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:04:15.923069 | orchestrator | 2026-02-28 01:04:15 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:04:15.923129 | orchestrator | 2026-02-28 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:04:18.959307 | orchestrator | 2026-02-28 01:04:18 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:04:18.959827 | orchestrator | 2026-02-28 01:04:18 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:04:18.960791 | orchestrator | 2026-02-28 01:04:18 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:04:18.961768 | orchestrator | 2026-02-28 01:04:18 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:04:18.963345 | orchestrator | 2026-02-28 01:04:18 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:04:18.963380 | orchestrator | 2026-02-28 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:04:22.023275 | orchestrator | 2026-02-28 01:04:22 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:04:22.023853 | orchestrator | 2026-02-28 01:04:22 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED 2026-02-28 01:04:22.024812 | orchestrator | 2026-02-28 01:04:22 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:04:22.025595 | orchestrator | 2026-02-28 01:04:22 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED 2026-02-28 01:04:22.026657 | orchestrator | 2026-02-28 01:04:22 | INFO  | Task 
34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:22.026729 | orchestrator | 2026-02-28 01:04:22 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:25.070287 | orchestrator | 2026-02-28 01:04:25 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:25.070609 | orchestrator | 2026-02-28 01:04:25 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:25.071302 | orchestrator | 2026-02-28 01:04:25 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:25.074211 | orchestrator | 2026-02-28 01:04:25 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED
2026-02-28 01:04:25.074979 | orchestrator | 2026-02-28 01:04:25 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:25.075012 | orchestrator | 2026-02-28 01:04:25 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:28.105463 | orchestrator | 2026-02-28 01:04:28 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:28.105749 | orchestrator | 2026-02-28 01:04:28 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:28.106481 | orchestrator | 2026-02-28 01:04:28 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:28.107265 | orchestrator | 2026-02-28 01:04:28 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED
2026-02-28 01:04:28.107952 | orchestrator | 2026-02-28 01:04:28 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:28.108001 | orchestrator | 2026-02-28 01:04:28 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:31.133751 | orchestrator | 2026-02-28 01:04:31 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:31.134336 | orchestrator | 2026-02-28 01:04:31 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:31.135257 | orchestrator | 2026-02-28 01:04:31 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:31.136129 | orchestrator | 2026-02-28 01:04:31 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state STARTED
2026-02-28 01:04:31.137026 | orchestrator | 2026-02-28 01:04:31 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:31.137204 | orchestrator | 2026-02-28 01:04:31 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:34.167547 | orchestrator | 2026-02-28 01:04:34 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:34.167958 | orchestrator | 2026-02-28 01:04:34 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:34.169107 | orchestrator | 2026-02-28 01:04:34 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:34.169487 | orchestrator | 2026-02-28 01:04:34 | INFO  | Task 3aef59c6-9c15-4310-a9ab-118d26fb7553 is in state SUCCESS
2026-02-28 01:04:34.170324 | orchestrator | 2026-02-28 01:04:34 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:34.170357 | orchestrator | 2026-02-28 01:04:34 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:37.203501 | orchestrator | 2026-02-28 01:04:37 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:37.204394 | orchestrator | 2026-02-28 01:04:37 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:37.205834 | orchestrator | 2026-02-28 01:04:37 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:37.207352 | orchestrator | 2026-02-28 01:04:37 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:37.207405 | orchestrator | 2026-02-28 01:04:37 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:40.249283 | orchestrator | 2026-02-28 01:04:40 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:40.249609 | orchestrator | 2026-02-28 01:04:40 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:40.281315 | orchestrator | 2026-02-28 01:04:40 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:40.283549 | orchestrator | 2026-02-28 01:04:40 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:40.283602 | orchestrator | 2026-02-28 01:04:40 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:43.330937 | orchestrator | 2026-02-28 01:04:43 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:43.331588 | orchestrator | 2026-02-28 01:04:43 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:43.334098 | orchestrator | 2026-02-28 01:04:43 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:43.334717 | orchestrator | 2026-02-28 01:04:43 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:43.334755 | orchestrator | 2026-02-28 01:04:43 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:46.372045 | orchestrator | 2026-02-28 01:04:46 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:46.372758 | orchestrator | 2026-02-28 01:04:46 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:46.373892 | orchestrator | 2026-02-28 01:04:46 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:46.374948 | orchestrator | 2026-02-28 01:04:46 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:46.374994 | orchestrator | 2026-02-28 01:04:46 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:49.413353 | orchestrator | 2026-02-28 01:04:49 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:49.413847 | orchestrator | 2026-02-28 01:04:49 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:49.415235 | orchestrator | 2026-02-28 01:04:49 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:49.416508 | orchestrator | 2026-02-28 01:04:49 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:49.416577 | orchestrator | 2026-02-28 01:04:49 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:52.462341 | orchestrator | 2026-02-28 01:04:52 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:52.463324 | orchestrator | 2026-02-28 01:04:52 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:52.464766 | orchestrator | 2026-02-28 01:04:52 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:52.465531 | orchestrator | 2026-02-28 01:04:52 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:52.465606 | orchestrator | 2026-02-28 01:04:52 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:55.511600 | orchestrator | 2026-02-28 01:04:55 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:55.513626 | orchestrator | 2026-02-28 01:04:55 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:55.514854 | orchestrator | 2026-02-28 01:04:55 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:55.516295 | orchestrator | 2026-02-28 01:04:55 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:55.516367 | orchestrator | 2026-02-28 01:04:55 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:04:58.554373 | orchestrator | 2026-02-28 01:04:58 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:04:58.556357 | orchestrator | 2026-02-28 01:04:58 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:04:58.557260 | orchestrator | 2026-02-28 01:04:58 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:04:58.558106 | orchestrator | 2026-02-28 01:04:58 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:04:58.558215 | orchestrator | 2026-02-28 01:04:58 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:05:01.591885 | orchestrator | 2026-02-28 01:05:01 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:05:01.592146 | orchestrator | 2026-02-28 01:05:01 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:05:01.593113 | orchestrator | 2026-02-28 01:05:01 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:05:01.593642 | orchestrator | 2026-02-28 01:05:01 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:05:01.593724 | orchestrator | 2026-02-28 01:05:01 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:05:04.639382 | orchestrator | 2026-02-28 01:05:04 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:05:04.640289 | orchestrator | 2026-02-28 01:05:04 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state STARTED
2026-02-28 01:05:04.641525 | orchestrator | 2026-02-28 01:05:04 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED
2026-02-28 01:05:04.642679 | orchestrator | 2026-02-28 01:05:04 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:05:04.642755 | orchestrator | 2026-02-28 01:05:04 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:05:07.679335 | orchestrator | 2026-02-28
01:05:07 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED
2026-02-28 01:05:07.682928 | orchestrator | 2026-02-28 01:05:07 | INFO  | Task 9e7dd0e0-c4d3-4c94-a8fd-4d78dcf50fb4 is in state SUCCESS
2026-02-28 01:05:07.683074 | orchestrator |
2026-02-28 01:05:07.683086 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-28 01:05:07.683092 | orchestrator | 2.16.14
2026-02-28 01:05:07.683098 | orchestrator |
2026-02-28 01:05:07.683103 | orchestrator | PLAY [Bootstrap ceph dashboard] ***********************************************
2026-02-28 01:05:07.683108 | orchestrator |
2026-02-28 01:05:07.683113 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-02-28 01:05:07.683118 | orchestrator | Saturday 28 February 2026 01:03:01 +0000 (0:00:00.326) 0:00:00.326 *****
2026-02-28 01:05:07.683123 | orchestrator | changed: [testbed-manager]
2026-02-28 01:05:07.683129 | orchestrator |
2026-02-28 01:05:07.683134 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-02-28 01:05:07.683139 | orchestrator | Saturday 28 February 2026 01:03:03 +0000 (0:00:02.047) 0:00:02.373 *****
2026-02-28 01:05:07.683144 | orchestrator | changed: [testbed-manager]
2026-02-28 01:05:07.683148 | orchestrator |
2026-02-28 01:05:07.683153 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-02-28 01:05:07.683180 | orchestrator | Saturday 28 February 2026 01:03:05 +0000 (0:00:01.193) 0:00:03.567 *****
2026-02-28 01:05:07.683186 | orchestrator | changed: [testbed-manager]
2026-02-28 01:05:07.683191 | orchestrator |
2026-02-28 01:05:07.683196 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-02-28 01:05:07.683201 | orchestrator | Saturday 28 February 2026 01:03:06 +0000 (0:00:01.186) 0:00:04.753 *****
2026-02-28 01:05:07.683205 | orchestrator | changed: [testbed-manager]
2026-02-28 01:05:07.683210 | orchestrator |
2026-02-28 01:05:07.683215 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-02-28 01:05:07.683220 | orchestrator | Saturday 28 February 2026 01:03:07 +0000 (0:00:01.380) 0:00:06.134 *****
2026-02-28 01:05:07.683225 | orchestrator | changed: [testbed-manager]
2026-02-28 01:05:07.683230 | orchestrator |
2026-02-28 01:05:07.683234 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-02-28 01:05:07.683239 | orchestrator | Saturday 28 February 2026 01:03:08 +0000 (0:00:01.379) 0:00:07.513 *****
2026-02-28 01:05:07.683244 | orchestrator | changed: [testbed-manager]
2026-02-28 01:05:07.683249 | orchestrator |
2026-02-28 01:05:07.683254 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-02-28 01:05:07.683259 | orchestrator | Saturday 28 February 2026 01:03:10 +0000 (0:00:02.050) 0:00:08.781 *****
2026-02-28 01:05:07.683263 | orchestrator | changed: [testbed-manager]
2026-02-28 01:05:07.683268 | orchestrator |
2026-02-28 01:05:07.683273 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-02-28 01:05:07.683278 | orchestrator | Saturday 28 February 2026 01:03:12 +0000 (0:00:01.582) 0:00:10.832 *****
2026-02-28 01:05:07.683283 | orchestrator | changed: [testbed-manager]
2026-02-28 01:05:07.683288 | orchestrator |
2026-02-28 01:05:07.683292 | orchestrator | TASK [Create admin user] *******************************************************
2026-02-28 01:05:07.683300 | orchestrator | Saturday 28 February 2026 01:03:13 +0000 (0:00:01.582) 0:00:12.414 *****
2026-02-28 01:05:07.683308 | orchestrator | changed: [testbed-manager]
2026-02-28 01:05:07.683317 | orchestrator |
2026-02-28 01:05:07.683325 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-02-28 01:05:07.683333 | orchestrator | Saturday 28 February 2026 01:04:06 +0000 (0:00:52.923) 0:01:05.339 *****
2026-02-28 01:05:07.683342 | orchestrator | skipping: [testbed-manager]
2026-02-28 01:05:07.683350 | orchestrator |
2026-02-28 01:05:07.683359 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-28 01:05:07.683367 | orchestrator |
2026-02-28 01:05:07.683376 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-28 01:05:07.683383 | orchestrator | Saturday 28 February 2026 01:04:06 +0000 (0:00:00.159) 0:01:05.498 *****
2026-02-28 01:05:07.683391 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:05:07.683399 | orchestrator |
2026-02-28 01:05:07.683407 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-28 01:05:07.683415 | orchestrator |
2026-02-28 01:05:07.683457 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-28 01:05:07.683465 | orchestrator | Saturday 28 February 2026 01:04:08 +0000 (0:00:01.836) 0:01:07.335 *****
2026-02-28 01:05:07.683473 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:05:07.683482 | orchestrator |
2026-02-28 01:05:07.683490 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-28 01:05:07.683498 | orchestrator |
2026-02-28 01:05:07.683506 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-28 01:05:07.683515 | orchestrator | Saturday 28 February 2026 01:04:20 +0000 (0:00:11.367) 0:01:18.703 *****
2026-02-28 01:05:07.683572 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:05:07.683582 | orchestrator |
2026-02-28 01:05:07.683590 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:05:07.683599 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-28 01:05:07.683618 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:05:07.683627 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:05:07.683634 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:05:07.683642 | orchestrator |
2026-02-28 01:05:07.683649 | orchestrator |
2026-02-28 01:05:07.683658 | orchestrator |
2026-02-28 01:05:07.683666 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:05:07.683674 | orchestrator | Saturday 28 February 2026 01:04:31 +0000 (0:00:11.187) 0:01:29.890 *****
2026-02-28 01:05:07.683682 | orchestrator | ===============================================================================
2026-02-28 01:05:07.683746 | orchestrator | Create admin user ------------------------------------------------------ 52.92s
2026-02-28 01:05:07.683767 | orchestrator | Restart ceph manager service ------------------------------------------- 24.39s
2026-02-28 01:05:07.683775 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.05s
2026-02-28 01:05:07.683782 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.05s
2026-02-28 01:05:07.683789 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.58s
2026-02-28 01:05:07.683797 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.38s
2026-02-28 01:05:07.683805 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.38s
2026-02-28 01:05:07.683813 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.27s
2026-02-28 01:05:07.683820 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.19s
2026-02-28 01:05:07.683835 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.19s
2026-02-28 01:05:07.683843 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s
2026-02-28 01:05:07.683851 | orchestrator |
2026-02-28 01:05:07.684390 | orchestrator |
2026-02-28 01:05:07.684418 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:05:07.684427 | orchestrator |
2026-02-28 01:05:07.684436 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:05:07.684444 | orchestrator | Saturday 28 February 2026 01:02:45 +0000 (0:00:00.489) 0:00:00.489 *****
2026-02-28 01:05:07.684453 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:05:07.684462 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:05:07.684470 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:05:07.684479 | orchestrator |
2026-02-28 01:05:07.684488 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:05:07.684497 | orchestrator | Saturday 28 February 2026 01:02:46 +0000 (0:00:00.561) 0:00:01.051 *****
2026-02-28 01:05:07.684505 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-02-28 01:05:07.684514 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-02-28 01:05:07.684523 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-02-28 01:05:07.684530 | orchestrator |
2026-02-28 01:05:07.684539 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-02-28 01:05:07.684547 | orchestrator |
2026-02-28 01:05:07.684554 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-28 01:05:07.684562 | orchestrator | Saturday 28 February 2026
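The ceph dashboard play above boils down to a fixed command sequence: disable the mgr dashboard module, set the `mgr/dashboard/*` options, re-enable the module, then create the admin user from a password file. A hedged sketch of the equivalent ceph CLI calls, built from the task names in the log; this is an illustration of what the play effects, not the exact module invocations it makes, and the password file path is hypothetical:

```python
# Option values taken from the task names in the play output above.
DASHBOARD_SETTINGS = {
    "mgr/dashboard/ssl": "false",
    "mgr/dashboard/server_port": "7000",
    "mgr/dashboard/server_addr": "0.0.0.0",
    "mgr/dashboard/standby_behaviour": "error",
    "mgr/dashboard/standby_error_status_code": "404",
}

def dashboard_bootstrap_commands(settings):
    """Build the command sequence: disable, configure, enable, create user."""
    cmds = ["ceph mgr module disable dashboard"]
    cmds += [f"ceph config set mgr {key} {value}" for key, value in settings.items()]
    cmds += [
        "ceph mgr module enable dashboard",
        # The password is read from a temporary file, as in the play above;
        # the path is illustrative.
        "ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator",
    ]
    return cmds

commands = dashboard_bootstrap_commands(DASHBOARD_SETTINGS)
```

The disable/enable bracket around the `ceph config set` calls matches the play's structure: the mgr only picks up the new port, bind address, and SSL setting when the dashboard module is restarted.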
01:02:46 +0000 (0:00:00.896) 0:00:01.947 ***** 2026-02-28 01:05:07.684571 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:05:07.684580 | orchestrator | 2026-02-28 01:05:07.684588 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-28 01:05:07.684596 | orchestrator | Saturday 28 February 2026 01:02:47 +0000 (0:00:00.794) 0:00:02.741 ***** 2026-02-28 01:05:07.684604 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-28 01:05:07.684623 | orchestrator | 2026-02-28 01:05:07.684631 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-28 01:05:07.684640 | orchestrator | Saturday 28 February 2026 01:02:52 +0000 (0:00:04.579) 0:00:07.320 ***** 2026-02-28 01:05:07.684648 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-28 01:05:07.684657 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-28 01:05:07.684737 | orchestrator | 2026-02-28 01:05:07.684747 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-28 01:05:07.684754 | orchestrator | Saturday 28 February 2026 01:02:59 +0000 (0:00:06.912) 0:00:14.233 ***** 2026-02-28 01:05:07.684762 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:05:07.684769 | orchestrator | 2026-02-28 01:05:07.684778 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-28 01:05:07.684786 | orchestrator | Saturday 28 February 2026 01:03:02 +0000 (0:00:03.465) 0:00:17.699 ***** 2026-02-28 01:05:07.684794 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-28 01:05:07.684803 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2026-02-28 01:05:07.684811 | orchestrator | 2026-02-28 01:05:07.684820 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-28 01:05:07.684828 | orchestrator | Saturday 28 February 2026 01:03:06 +0000 (0:00:04.067) 0:00:21.766 ***** 2026-02-28 01:05:07.684836 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:05:07.684867 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-28 01:05:07.684877 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-28 01:05:07.684886 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-28 01:05:07.684894 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-28 01:05:07.684902 | orchestrator | 2026-02-28 01:05:07.684910 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-28 01:05:07.684918 | orchestrator | Saturday 28 February 2026 01:03:24 +0000 (0:00:17.922) 0:00:39.689 ***** 2026-02-28 01:05:07.684927 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-28 01:05:07.684936 | orchestrator | 2026-02-28 01:05:07.684944 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-28 01:05:07.684952 | orchestrator | Saturday 28 February 2026 01:03:29 +0000 (0:00:04.343) 0:00:44.034 ***** 2026-02-28 01:05:07.684963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.684990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.685010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': 
'30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.685020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685095 | orchestrator | 2026-02-28 01:05:07.685105 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-28 01:05:07.685114 | orchestrator | Saturday 28 February 2026 01:03:30 +0000 (0:00:01.946) 0:00:45.981 ***** 2026-02-28 01:05:07.685124 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-28 01:05:07.685133 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-28 01:05:07.685141 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-28 01:05:07.685150 | orchestrator | 2026-02-28 01:05:07.685158 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-28 01:05:07.685168 | orchestrator | Saturday 28 February 2026 01:03:32 +0000 (0:00:01.775) 0:00:47.757 ***** 2026-02-28 01:05:07.685177 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:07.685187 | orchestrator | 2026-02-28 01:05:07.685196 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-28 01:05:07.685206 | orchestrator | Saturday 28 February 2026 01:03:32 +0000 (0:00:00.111) 0:00:47.869 ***** 2026-02-28 01:05:07.685215 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:07.685224 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:07.685233 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:07.685241 | orchestrator | 2026-02-28 01:05:07.685251 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-28 01:05:07.685260 | 
orchestrator | Saturday 28 February 2026 01:03:33 +0000 (0:00:00.489) 0:00:48.358 ***** 2026-02-28 01:05:07.685269 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:05:07.685278 | orchestrator | 2026-02-28 01:05:07.685287 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-28 01:05:07.685295 | orchestrator | Saturday 28 February 2026 01:03:34 +0000 (0:00:01.468) 0:00:49.827 ***** 2026-02-28 01:05:07.685303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.685328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.685338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.685346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.685416 | orchestrator | 2026-02-28 01:05:07.685424 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-02-28 01:05:07.685432 | orchestrator | Saturday 28 February 2026 01:03:38 +0000 (0:00:03.787) 0:00:53.614 ***** 2026-02-28 01:05:07.685441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:07.685449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685472 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:07.685486 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:07.685495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685511 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:07.685551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:07.685561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685574 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685582 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:07.685590 | orchestrator | 2026-02-28 01:05:07.685597 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-28 01:05:07.685605 | orchestrator | Saturday 28 February 2026 01:03:39 +0000 (0:00:01.196) 0:00:54.811 ***** 2026-02-28 01:05:07.685623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:07.685633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685651 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:07.685661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:07.685675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685726 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:07.685740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:07.685749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.685795 | orchestrator | skipping: [testbed-node-2] 2026-02-28 
01:05:07.685805 | orchestrator | 2026-02-28 01:05:07.685813 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-28 01:05:07.685821 | orchestrator | Saturday 28 February 2026 01:03:42 +0000 (0:00:02.482) 0:00:57.293 ***** 2026-02-28 01:05:07.685836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.686123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.686141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.686151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686219 | orchestrator | 2026-02-28 01:05:07.686227 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-28 01:05:07.686235 | orchestrator | Saturday 28 February 2026 01:03:47 +0000 (0:00:04.747) 0:01:02.041 ***** 2026-02-28 01:05:07.686244 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:05:07.686252 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:07.686260 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:05:07.686268 | orchestrator | 2026-02-28 01:05:07.686275 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-28 01:05:07.686283 | orchestrator | Saturday 28 February 2026 01:03:51 +0000 (0:00:04.690) 0:01:06.731 ***** 2026-02-28 01:05:07.686291 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:05:07.686299 | orchestrator | 2026-02-28 
01:05:07.686307 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-28 01:05:07.686314 | orchestrator | Saturday 28 February 2026 01:03:55 +0000 (0:00:03.285) 0:01:10.017 ***** 2026-02-28 01:05:07.686322 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:07.686330 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:07.686338 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:07.686346 | orchestrator | 2026-02-28 01:05:07.686354 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-28 01:05:07.686362 | orchestrator | Saturday 28 February 2026 01:03:55 +0000 (0:00:00.972) 0:01:10.990 ***** 2026-02-28 01:05:07.686377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.686386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.686404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.686412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686447 | orchestrator | 2026-02-28 01:05:07.686452 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-28 01:05:07.686460 | orchestrator | Saturday 28 February 2026 01:04:08 +0000 (0:00:12.905) 0:01:23.896 ***** 2026-02-28 01:05:07.686468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:07.686473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.686481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.686487 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:07.686492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:07.686497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.686507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.686513 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:07.686518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-28 01:05:07.686529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.686534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:05:07.686539 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:07.686544 | orchestrator | 2026-02-28 01:05:07.686551 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-28 01:05:07.686559 | orchestrator | Saturday 28 February 2026 01:04:10 +0000 (0:00:01.946) 0:01:25.842 ***** 2026-02-28 01:05:07.686568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.686579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.686585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:07.686593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:05:07.686624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2026-02-28 01:05:07.686632 | orchestrator | 2026-02-28 01:05:07.686637 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-28 01:05:07.686642 | orchestrator | Saturday 28 February 2026 01:04:15 +0000 (0:00:04.768) 0:01:30.610 ***** 2026-02-28 01:05:07.686647 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:07.686652 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:07.686657 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:07.686662 | orchestrator | 2026-02-28 01:05:07.686666 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-28 01:05:07.686671 | orchestrator | Saturday 28 February 2026 01:04:16 +0000 (0:00:00.759) 0:01:31.369 ***** 2026-02-28 01:05:07.686676 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:07.686681 | orchestrator | 2026-02-28 01:05:07.686686 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-28 01:05:07.686707 | orchestrator | Saturday 28 February 2026 01:04:19 +0000 (0:00:02.673) 0:01:34.043 ***** 2026-02-28 01:05:07.686714 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:07.686719 | orchestrator | 2026-02-28 01:05:07.686724 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-28 01:05:07.686729 | orchestrator | Saturday 28 February 2026 01:04:21 +0000 (0:00:02.922) 0:01:36.965 ***** 2026-02-28 01:05:07.686734 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:07.686739 | orchestrator | 2026-02-28 01:05:07.686744 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-28 01:05:07.686749 | orchestrator | Saturday 28 February 2026 01:04:33 +0000 (0:00:11.112) 0:01:48.078 ***** 2026-02-28 01:05:07.686753 | orchestrator | 2026-02-28 01:05:07.686758 | orchestrator | TASK [barbican : Flush handlers] 
*********************************************** 2026-02-28 01:05:07.686763 | orchestrator | Saturday 28 February 2026 01:04:33 +0000 (0:00:00.144) 0:01:48.223 ***** 2026-02-28 01:05:07.686768 | orchestrator | 2026-02-28 01:05:07.686773 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-28 01:05:07.686778 | orchestrator | Saturday 28 February 2026 01:04:33 +0000 (0:00:00.257) 0:01:48.480 ***** 2026-02-28 01:05:07.686783 | orchestrator | 2026-02-28 01:05:07.686788 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-28 01:05:07.686792 | orchestrator | Saturday 28 February 2026 01:04:33 +0000 (0:00:00.177) 0:01:48.658 ***** 2026-02-28 01:05:07.686797 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:07.686802 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:05:07.686807 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:05:07.686812 | orchestrator | 2026-02-28 01:05:07.686817 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-28 01:05:07.686822 | orchestrator | Saturday 28 February 2026 01:04:42 +0000 (0:00:08.417) 0:01:57.076 ***** 2026-02-28 01:05:07.686827 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:07.686831 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:05:07.686837 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:05:07.686845 | orchestrator | 2026-02-28 01:05:07.686853 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-28 01:05:07.686861 | orchestrator | Saturday 28 February 2026 01:04:53 +0000 (0:00:11.221) 0:02:08.298 ***** 2026-02-28 01:05:07.686869 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:07.686877 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:05:07.686885 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:05:07.686894 | orchestrator | 2026-02-28 
01:05:07.686902 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:05:07.686917 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:05:07.686926 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:05:07.686935 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:05:07.686943 | orchestrator | 2026-02-28 01:05:07.686951 | orchestrator | 2026-02-28 01:05:07.686958 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:05:07.686966 | orchestrator | Saturday 28 February 2026 01:05:05 +0000 (0:00:12.349) 0:02:20.647 ***** 2026-02-28 01:05:07.686976 | orchestrator | =============================================================================== 2026-02-28 01:05:07.686988 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.92s 2026-02-28 01:05:07.687004 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.91s 2026-02-28 01:05:07.687013 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.35s 2026-02-28 01:05:07.687021 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.22s 2026-02-28 01:05:07.687027 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.11s 2026-02-28 01:05:07.687034 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.42s 2026-02-28 01:05:07.687039 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.91s 2026-02-28 01:05:07.687045 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.77s 2026-02-28 01:05:07.687050 | 
orchestrator | barbican : Copying over config.json files for services ------------------ 4.75s 2026-02-28 01:05:07.687056 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.69s 2026-02-28 01:05:07.687062 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.58s 2026-02-28 01:05:07.687068 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.35s 2026-02-28 01:05:07.687074 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.07s 2026-02-28 01:05:07.687080 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.79s 2026-02-28 01:05:07.687084 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.47s 2026-02-28 01:05:07.687089 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 3.29s 2026-02-28 01:05:07.687094 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.92s 2026-02-28 01:05:07.687099 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.67s 2026-02-28 01:05:07.687104 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.48s 2026-02-28 01:05:07.687108 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.95s 2026-02-28 01:05:07.687113 | orchestrator | 2026-02-28 01:05:07 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:05:07.687118 | orchestrator | 2026-02-28 01:05:07 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:05:07.687123 | orchestrator | 2026-02-28 01:05:07 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:05:07.687128 | orchestrator | 2026-02-28 01:05:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 
01:05:10.719002 | orchestrator | 2026-02-28 01:05:10 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:05:10.721870 | orchestrator | 2026-02-28 01:05:10 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state STARTED 2026-02-28 01:05:10.723027 | orchestrator | 2026-02-28 01:05:10 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:05:10.724286 | orchestrator | 2026-02-28 01:05:10 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:05:10.724403 | orchestrator | 2026-02-28 01:05:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:13.769383 | orchestrator | 2026-02-28 01:05:13 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:05:13.769745 | orchestrator | 2026-02-28 01:05:13 | INFO  | Task 7bb4bddc-c9f6-4c25-b18f-73519905ea6f is in state STARTED 2026-02-28 01:05:13.770958 | orchestrator | 2026-02-28 01:05:13 | INFO  | Task 78fc38fd-d4d9-436f-9f17-34e393166950 is in state SUCCESS 2026-02-28 01:05:13.772216 | orchestrator | 2026-02-28 01:05:13.772250 | orchestrator | 2026-02-28 01:05:13.772331 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:05:13.772340 | orchestrator | 2026-02-28 01:05:13.772347 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:05:13.772354 | orchestrator | Saturday 28 February 2026 01:03:38 +0000 (0:00:00.375) 0:00:00.375 ***** 2026-02-28 01:05:13.772361 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:05:13.772369 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:05:13.772375 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:05:13.772381 | orchestrator | 2026-02-28 01:05:13.772387 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:05:13.772395 | orchestrator | Saturday 28 February 2026 01:03:38 
+0000 (0:00:00.489) 0:00:00.864 ***** 2026-02-28 01:05:13.772400 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-28 01:05:13.772404 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-28 01:05:13.772408 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-28 01:05:13.772412 | orchestrator | 2026-02-28 01:05:13.772416 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-28 01:05:13.772420 | orchestrator | 2026-02-28 01:05:13.772424 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-28 01:05:13.772428 | orchestrator | Saturday 28 February 2026 01:03:39 +0000 (0:00:00.602) 0:00:01.467 ***** 2026-02-28 01:05:13.772432 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:05:13.772437 | orchestrator | 2026-02-28 01:05:13.772441 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-28 01:05:13.772457 | orchestrator | Saturday 28 February 2026 01:03:40 +0000 (0:00:01.283) 0:00:02.750 ***** 2026-02-28 01:05:13.772462 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-28 01:05:13.772466 | orchestrator | 2026-02-28 01:05:13.772470 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-28 01:05:13.772473 | orchestrator | Saturday 28 February 2026 01:03:44 +0000 (0:00:03.731) 0:00:06.482 ***** 2026-02-28 01:05:13.772477 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-28 01:05:13.772481 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-28 01:05:13.772485 | orchestrator | 2026-02-28 01:05:13.772489 | orchestrator | TASK 
[service-ks-register : placement | Creating projects] ********************* 2026-02-28 01:05:13.772493 | orchestrator | Saturday 28 February 2026 01:03:52 +0000 (0:00:07.766) 0:00:14.249 ***** 2026-02-28 01:05:13.772497 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:05:13.772501 | orchestrator | 2026-02-28 01:05:13.772505 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-28 01:05:13.772509 | orchestrator | Saturday 28 February 2026 01:03:55 +0000 (0:00:03.709) 0:00:17.958 ***** 2026-02-28 01:05:13.772512 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-02-28 01:05:13.772516 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:05:13.772537 | orchestrator | 2026-02-28 01:05:13.772541 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-28 01:05:13.772545 | orchestrator | Saturday 28 February 2026 01:04:00 +0000 (0:00:04.475) 0:00:22.433 ***** 2026-02-28 01:05:13.772549 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:05:13.772553 | orchestrator | 2026-02-28 01:05:13.772557 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-28 01:05:13.772561 | orchestrator | Saturday 28 February 2026 01:04:04 +0000 (0:00:04.463) 0:00:26.897 ***** 2026-02-28 01:05:13.772565 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-28 01:05:13.772569 | orchestrator | 2026-02-28 01:05:13.772572 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-28 01:05:13.772576 | orchestrator | Saturday 28 February 2026 01:04:10 +0000 (0:00:05.136) 0:00:32.033 ***** 2026-02-28 01:05:13.772580 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:13.772584 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:13.772588 | orchestrator | 
skipping: [testbed-node-2] 2026-02-28 01:05:13.772592 | orchestrator | 2026-02-28 01:05:13.772596 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-28 01:05:13.772600 | orchestrator | Saturday 28 February 2026 01:04:11 +0000 (0:00:00.972) 0:00:33.006 ***** 2026-02-28 01:05:13.772607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772639 | orchestrator | 2026-02-28 01:05:13.772643 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-28 01:05:13.772647 | orchestrator | Saturday 28 February 2026 01:04:12 +0000 (0:00:01.877) 0:00:34.883 ***** 2026-02-28 01:05:13.772651 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:13.772655 | orchestrator | 2026-02-28 01:05:13.772659 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-28 01:05:13.772662 | orchestrator | Saturday 28 February 2026 01:04:13 +0000 (0:00:00.153) 0:00:35.037 ***** 2026-02-28 01:05:13.772666 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:13.772670 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:13.772674 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:13.772678 
| orchestrator | 2026-02-28 01:05:13.772682 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-28 01:05:13.772685 | orchestrator | Saturday 28 February 2026 01:04:13 +0000 (0:00:00.545) 0:00:35.583 ***** 2026-02-28 01:05:13.772689 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:05:13.772734 | orchestrator | 2026-02-28 01:05:13.772743 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-28 01:05:13.772748 | orchestrator | Saturday 28 February 2026 01:04:14 +0000 (0:00:00.634) 0:00:36.218 ***** 2026-02-28 01:05:13.772755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772792 | orchestrator | 2026-02-28 01:05:13.772798 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-28 01:05:13.772805 | orchestrator | Saturday 28 February 2026 01:04:17 +0000 (0:00:03.097) 0:00:39.315 ***** 2026-02-28 01:05:13.772811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:05:13.772818 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:13.772824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:05:13.772831 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:13.772842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:05:13.772846 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:13.772850 | orchestrator | 2026-02-28 01:05:13.772854 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-28 01:05:13.772858 | orchestrator | Saturday 28 February 2026 01:04:19 +0000 (0:00:02.634) 0:00:41.950 ***** 2026-02-28 01:05:13.772865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:05:13.772874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:05:13.772879 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:13.772882 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:13.772886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:05:13.772890 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:13.772894 | orchestrator | 2026-02-28 01:05:13.772905 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-28 01:05:13.772910 | orchestrator | Saturday 28 February 2026 01:04:22 +0000 (0:00:02.190) 0:00:44.141 ***** 2026-02-28 01:05:13.772917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772941 | orchestrator | 2026-02-28 01:05:13.772945 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-28 01:05:13.772949 | orchestrator | Saturday 28 February 2026 01:04:23 +0000 (0:00:01.346) 0:00:45.487 ***** 2026-02-28 01:05:13.772953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.772975 | orchestrator | 2026-02-28 01:05:13.772979 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-28 01:05:13.772983 | orchestrator | Saturday 28 February 2026 01:04:27 +0000 (0:00:04.146) 0:00:49.633 ***** 2026-02-28 01:05:13.772988 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-28 01:05:13.772993 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-28 01:05:13.772997 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-28 01:05:13.773002 | orchestrator | 2026-02-28 01:05:13.773009 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-28 01:05:13.773014 | orchestrator | Saturday 28 February 2026 01:04:29 +0000 (0:00:02.143) 0:00:51.777 ***** 2026-02-28 01:05:13.773018 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:13.773023 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:05:13.773027 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:05:13.773032 | orchestrator | 2026-02-28 01:05:13.773036 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-28 01:05:13.773041 | orchestrator | Saturday 28 February 2026 01:04:32 +0000 (0:00:02.307) 0:00:54.084 ***** 2026-02-28 01:05:13.773109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:05:13.773119 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:05:13.773127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:05:13.773133 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:05:13.773145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-28 01:05:13.773158 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:05:13.773165 | orchestrator | 2026-02-28 01:05:13.773172 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-28 01:05:13.773176 | orchestrator | Saturday 28 February 2026 01:04:33 +0000 (0:00:01.264) 0:00:55.349 ***** 2026-02-28 01:05:13.773184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.773189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-28 01:05:13.773194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-28 01:05:13.773242 | orchestrator | 2026-02-28 01:05:13.773247 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-28 01:05:13.773253 | orchestrator | Saturday 28 February 2026 01:04:35 +0000 (0:00:02.141) 0:00:57.491 ***** 2026-02-28 01:05:13.773259 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:13.773275 | orchestrator | 2026-02-28 01:05:13.773282 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-28 01:05:13.773289 | orchestrator | Saturday 28 February 2026 01:04:38 +0000 (0:00:03.226) 0:01:00.718 ***** 2026-02-28 01:05:13.773296 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:13.773302 | orchestrator | 2026-02-28 01:05:13.773308 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-28 01:05:13.773316 | orchestrator | Saturday 28 February 2026 01:04:41 +0000 (0:00:02.412) 0:01:03.130 ***** 2026-02-28 01:05:13.773321 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:13.773325 | orchestrator | 2026-02-28 01:05:13.773330 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-28 01:05:13.773335 | orchestrator | Saturday 28 February 2026 01:04:57 +0000 (0:00:16.299) 0:01:19.430 ***** 2026-02-28 01:05:13.773339 | orchestrator | 2026-02-28 01:05:13.773344 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-28 01:05:13.773348 | orchestrator | Saturday 28 February 2026 01:04:57 +0000 (0:00:00.155) 0:01:19.585 ***** 2026-02-28 01:05:13.773353 | orchestrator | 2026-02-28 01:05:13.773361 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-28 01:05:13.773365 | orchestrator | Saturday 28 February 2026 01:04:57 +0000 (0:00:00.076) 0:01:19.662 ***** 2026-02-28 01:05:13.773369 | orchestrator | 2026-02-28 
01:05:13.773373 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-28 01:05:13.773377 | orchestrator | Saturday 28 February 2026 01:04:57 +0000 (0:00:00.150) 0:01:19.813 ***** 2026-02-28 01:05:13.773381 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:05:13.773384 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:05:13.773388 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:05:13.773392 | orchestrator | 2026-02-28 01:05:13.773396 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:05:13.773401 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:05:13.773407 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 01:05:13.773411 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 01:05:13.773415 | orchestrator | 2026-02-28 01:05:13.773419 | orchestrator | 2026-02-28 01:05:13.773423 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:05:13.773427 | orchestrator | Saturday 28 February 2026 01:05:09 +0000 (0:00:11.707) 0:01:31.520 ***** 2026-02-28 01:05:13.773434 | orchestrator | =============================================================================== 2026-02-28 01:05:13.773438 | orchestrator | placement : Running placement bootstrap container ---------------------- 16.30s 2026-02-28 01:05:13.773442 | orchestrator | placement : Restart placement-api container ---------------------------- 11.71s 2026-02-28 01:05:13.773446 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.77s 2026-02-28 01:05:13.773449 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 5.14s 2026-02-28 01:05:13.773453 | 
orchestrator | service-ks-register : placement | Creating users ------------------------ 4.48s 2026-02-28 01:05:13.773457 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.46s 2026-02-28 01:05:13.773461 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.15s 2026-02-28 01:05:13.773465 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.73s 2026-02-28 01:05:13.773469 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.71s 2026-02-28 01:05:13.773473 | orchestrator | placement : Creating placement databases -------------------------------- 3.23s 2026-02-28 01:05:13.773479 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 3.10s 2026-02-28 01:05:13.773490 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 2.63s 2026-02-28 01:05:13.773498 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.41s 2026-02-28 01:05:13.773506 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.31s 2026-02-28 01:05:13.773512 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 2.19s 2026-02-28 01:05:13.773518 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.14s 2026-02-28 01:05:13.773524 | orchestrator | placement : Check placement containers ---------------------------------- 2.14s 2026-02-28 01:05:13.773530 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.87s 2026-02-28 01:05:13.773536 | orchestrator | placement : Copying over config.json files for services ----------------- 1.35s 2026-02-28 01:05:13.773542 | orchestrator | placement : include_tasks ----------------------------------------------- 1.29s 2026-02-28 01:05:13.773548 | 
orchestrator | 2026-02-28 01:05:13 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:05:13.773556 | orchestrator | 2026-02-28 01:05:13 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:05:13.773562 | orchestrator | 2026-02-28 01:05:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:16.802865 | orchestrator | 2026-02-28 01:05:16 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:05:16.803287 | orchestrator | 2026-02-28 01:05:16 | INFO  | Task 7bb4bddc-c9f6-4c25-b18f-73519905ea6f is in state STARTED 2026-02-28 01:05:16.804335 | orchestrator | 2026-02-28 01:05:16 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:05:16.805454 | orchestrator | 2026-02-28 01:05:16 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:05:16.805600 | orchestrator | 2026-02-28 01:05:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:19.869021 | orchestrator | 2026-02-28 01:05:19 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:05:19.893962 | orchestrator | 2026-02-28 01:05:19 | INFO  | Task 7bb4bddc-c9f6-4c25-b18f-73519905ea6f is in state STARTED 2026-02-28 01:05:19.897372 | orchestrator | 2026-02-28 01:05:19 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:05:19.898884 | orchestrator | 2026-02-28 01:05:19 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:05:19.898948 | orchestrator | 2026-02-28 01:05:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:22.938379 | orchestrator | 2026-02-28 01:05:22 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:05:22.938484 | orchestrator | 2026-02-28 01:05:22 | INFO  | Task 7bb4bddc-c9f6-4c25-b18f-73519905ea6f is in state STARTED 2026-02-28 01:05:22.939851 | orchestrator | 2026-02-28 
01:05:22 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:05:22.942343 | orchestrator | 2026-02-28 01:05:22 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:05:22.942376 | orchestrator | 2026-02-28 01:05:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:25.982781 | orchestrator | 2026-02-28 01:05:25 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:05:25.982917 | orchestrator | 2026-02-28 01:05:25 | INFO  | Task 7bb4bddc-c9f6-4c25-b18f-73519905ea6f is in state SUCCESS 2026-02-28 01:05:25.985102 | orchestrator | 2026-02-28 01:05:25 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:05:25.986580 | orchestrator | 2026-02-28 01:05:25 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:05:25.988503 | orchestrator | 2026-02-28 01:05:25 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:05:25.988550 | orchestrator | 2026-02-28 01:05:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:29.057622 | orchestrator | 2026-02-28 01:05:29 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:05:29.058343 | orchestrator | 2026-02-28 01:05:29 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:05:29.060492 | orchestrator | 2026-02-28 01:05:29 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:05:29.064114 | orchestrator | 2026-02-28 01:05:29 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:05:29.064164 | orchestrator | 2026-02-28 01:05:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:05:32.107860 | orchestrator | 2026-02-28 01:05:32 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state STARTED 2026-02-28 01:05:32.108960 | orchestrator | 2026-02-28 01:05:32 | INFO  | Task 
74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:05:32.110120 | orchestrator | 2026-02-28 01:05:32 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:05:32.112955 | orchestrator | 2026-02-28 01:05:32 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:05:32.113013 | orchestrator | 2026-02-28 01:05:32 | INFO  | Wait 1 second(s) until the next check
[identical poll cycles repeated every ~3 s from 01:05:35 through 01:06:23; tasks a8bab12a-e122-40d2-97dc-7f6ba62cbbae, 74149736-33fd-4f01-bbeb-6ba573075c69, 34b2796b-2316-4441-9c8c-9421c9d47620 and 019147cd-6327-4726-9b3b-bb353e6604b9 remained in state STARTED]
2026-02-28 01:06:27.015215 | orchestrator | 2026-02-28 01:06:27 | INFO  | Task a8bab12a-e122-40d2-97dc-7f6ba62cbbae is in state SUCCESS 2026-02-28 01:06:27.017862 | orchestrator | 2026-02-28 01:06:27.017919 | orchestrator | 2026-02-28 
01:06:27.018452 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:06:27.018478 | orchestrator | 2026-02-28 01:06:27.018490 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:06:27.018502 | orchestrator | Saturday 28 February 2026 01:05:20 +0000 (0:00:00.674) 0:00:00.674 ***** 2026-02-28 01:06:27.018513 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:06:27.018526 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:06:27.018537 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:06:27.018548 | orchestrator | 2026-02-28 01:06:27.018560 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:06:27.018571 | orchestrator | Saturday 28 February 2026 01:05:21 +0000 (0:00:00.522) 0:00:01.196 ***** 2026-02-28 01:06:27.018582 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-28 01:06:27.018618 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-28 01:06:27.018631 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-28 01:06:27.018642 | orchestrator | 2026-02-28 01:06:27.018654 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-02-28 01:06:27.018664 | orchestrator | 2026-02-28 01:06:27.018675 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-02-28 01:06:27.018790 | orchestrator | Saturday 28 February 2026 01:05:22 +0000 (0:00:01.211) 0:00:02.408 ***** 2026-02-28 01:06:27.018813 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:06:27.018832 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:06:27.018843 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:06:27.018854 | orchestrator | 2026-02-28 01:06:27.018865 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 
01:06:27.018877 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:06:27.018891 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:06:27.019018 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-28 01:06:27.019034 | orchestrator | 2026-02-28 01:06:27.019048 | orchestrator | 2026-02-28 01:06:27.019061 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:06:27.019074 | orchestrator | Saturday 28 February 2026 01:05:23 +0000 (0:00:01.093) 0:00:03.501 ***** 2026-02-28 01:06:27.019087 | orchestrator | =============================================================================== 2026-02-28 01:06:27.019100 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.21s 2026-02-28 01:06:27.019189 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.09s 2026-02-28 01:06:27.019202 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s 2026-02-28 01:06:27.019215 | orchestrator | 2026-02-28 01:06:27.019232 | orchestrator | 2026-02-28 01:06:27.019252 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:06:27.019271 | orchestrator | 2026-02-28 01:06:27.019288 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:06:27.019307 | orchestrator | Saturday 28 February 2026 01:02:45 +0000 (0:00:00.305) 0:00:00.305 ***** 2026-02-28 01:06:27.019325 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:06:27.019343 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:06:27.019358 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:06:27.019376 | orchestrator | 2026-02-28 01:06:27.019395 | orchestrator | TASK [Group 
hosts based on enabled services] *********************************** 2026-02-28 01:06:27.019414 | orchestrator | Saturday 28 February 2026 01:02:45 +0000 (0:00:00.465) 0:00:00.771 ***** 2026-02-28 01:06:27.019435 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-28 01:06:27.019455 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-28 01:06:27.019473 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-28 01:06:27.019492 | orchestrator | 2026-02-28 01:06:27.019507 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-28 01:06:27.019518 | orchestrator | 2026-02-28 01:06:27.019529 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 01:06:27.019540 | orchestrator | Saturday 28 February 2026 01:02:46 +0000 (0:00:00.675) 0:00:01.446 ***** 2026-02-28 01:06:27.019556 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:06:27.019574 | orchestrator | 2026-02-28 01:06:27.019591 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-02-28 01:06:27.019604 | orchestrator | Saturday 28 February 2026 01:02:47 +0000 (0:00:00.821) 0:00:02.268 ***** 2026-02-28 01:06:27.019621 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-28 01:06:27.019637 | orchestrator | 2026-02-28 01:06:27.019648 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-02-28 01:06:27.019659 | orchestrator | Saturday 28 February 2026 01:02:51 +0000 (0:00:04.215) 0:00:06.484 ***** 2026-02-28 01:06:27.019670 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-28 01:06:27.019682 | orchestrator | changed: [testbed-node-0] => (item=designate -> 
https://api.testbed.osism.xyz:9001 -> public) 2026-02-28 01:06:27.019693 | orchestrator | 2026-02-28 01:06:27.019739 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-28 01:06:27.019758 | orchestrator | Saturday 28 February 2026 01:02:58 +0000 (0:00:07.085) 0:00:13.570 ***** 2026-02-28 01:06:27.019940 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-28 01:06:27.019964 | orchestrator | 2026-02-28 01:06:27.019982 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-28 01:06:27.020002 | orchestrator | Saturday 28 February 2026 01:03:02 +0000 (0:00:03.564) 0:00:17.135 ***** 2026-02-28 01:06:27.020039 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-28 01:06:27.020069 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:06:27.020080 | orchestrator | 2026-02-28 01:06:27.020091 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-28 01:06:27.020102 | orchestrator | Saturday 28 February 2026 01:03:06 +0000 (0:00:04.481) 0:00:21.617 ***** 2026-02-28 01:06:27.020113 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:06:27.020125 | orchestrator | 2026-02-28 01:06:27.020136 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-28 01:06:27.020147 | orchestrator | Saturday 28 February 2026 01:03:10 +0000 (0:00:03.780) 0:00:25.398 ***** 2026-02-28 01:06:27.020158 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-28 01:06:27.020169 | orchestrator | 2026-02-28 01:06:27.020180 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-28 01:06:27.020191 | orchestrator | Saturday 28 February 2026 01:03:14 +0000 (0:00:04.201) 0:00:29.599 ***** 2026-02-28 01:06:27.020215 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:27.020234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:27.020247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:27.020259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2026-02-28 01:06:27.020384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.020621 | orchestrator | 2026-02-28 01:06:27.020641 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-28 01:06:27.020659 | orchestrator | Saturday 28 February 2026 01:03:17 +0000 (0:00:03.290) 0:00:32.889 ***** 2026-02-28 01:06:27.020677 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:27.020762 | orchestrator | 2026-02-28 01:06:27.020786 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-28 01:06:27.020804 | orchestrator | Saturday 28 February 2026 01:03:17 +0000 (0:00:00.172) 0:00:33.061 ***** 2026-02-28 01:06:27.020822 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:27.020842 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:27.020860 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:27.020880 | orchestrator | 2026-02-28 01:06:27.020897 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 01:06:27.020908 | orchestrator | Saturday 28 February 2026 01:03:18 +0000 (0:00:00.355) 0:00:33.417 ***** 2026-02-28 01:06:27.020919 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:06:27.020930 | orchestrator | 2026-02-28 01:06:27.020940 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-28 01:06:27.020956 | orchestrator | Saturday 28 February 2026 01:03:19 +0000 (0:00:00.893) 0:00:34.311 ***** 2026-02-28 01:06:27.020968 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.020979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.020999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.021017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:06:27.021028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:06:27.021043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:06:27.021054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021255 | orchestrator |
2026-02-28 01:06:27.021271 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-02-28 01:06:27.021286 | orchestrator | Saturday 28 February 2026 01:03:26 +0000 (0:00:06.942) 0:00:41.253 *****
2026-02-28 01:06:27.021311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.021330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:06:27.021349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021519 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:06:27.021545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.021563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:06:27.021580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021670 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:06:27.021753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.021776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:06:27.021800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021841 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:06:27.021851 | orchestrator |
2026-02-28 01:06:27.021868 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-02-28 01:06:27.021879 | orchestrator | Saturday 28 February 2026 01:03:28 +0000 (0:00:01.820) 0:00:43.073 *****
2026-02-28 01:06:27.021893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.021903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:06:27.021917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021950 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:06:27.021964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.021977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:06:27.021991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.021999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.022008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.022053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.022064 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:06:27.022079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.022093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:06:27.022108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.022116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.022125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.022133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:06:27.022141 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:06:27.022149 | orchestrator |
2026-02-28 01:06:27.022157 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-02-28 01:06:27.022166 | orchestrator | Saturday 28 February 2026 01:03:29 +0000 (0:00:01.683) 0:00:44.756 *****
2026-02-28 01:06:27.022179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.022193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.022207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-28 01:06:27.022216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-28 01:06:27.022224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes':
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022366 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022375 | orchestrator | 2026-02-28 01:06:27.022383 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-28 01:06:27.022391 | orchestrator | Saturday 28 February 2026 01:03:36 +0000 (0:00:06.771) 0:00:51.528 ***** 2026-02-28 01:06:27.022602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:27.022737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:27.022751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:27.022759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-02-28 01:06:27.022878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.022891 | orchestrator | 2026-02-28 01:06:27.022898 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-28 01:06:27.022905 | orchestrator | Saturday 28 February 2026 01:04:03 +0000 (0:00:27.238) 0:01:18.766 ***** 2026-02-28 01:06:27.022911 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-28 01:06:27.022918 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-28 01:06:27.022924 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-28 01:06:27.022929 | orchestrator | 2026-02-28 01:06:27.022935 | 
orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-28 01:06:27.022941 | orchestrator | Saturday 28 February 2026 01:04:13 +0000 (0:00:10.087) 0:01:28.853 ***** 2026-02-28 01:06:27.022997 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-28 01:06:27.023014 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-28 01:06:27.023023 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-28 01:06:27.023033 | orchestrator | 2026-02-28 01:06:27.023043 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-28 01:06:27.023053 | orchestrator | Saturday 28 February 2026 01:04:19 +0000 (0:00:06.130) 0:01:34.984 ***** 2026-02-28 01:06:27.023069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:27.023083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:27.023091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:27.023097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 
01:06:27.023264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023284 | orchestrator | 2026-02-28 01:06:27.023292 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-02-28 01:06:27.023298 | orchestrator | Saturday 28 February 2026 01:04:24 +0000 (0:00:04.286) 0:01:39.271 ***** 2026-02-28 01:06:27.023309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:27.023321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:27.023330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-28 01:06:27.023337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:27.023391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 
01:06:27.023453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023497 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023533 | orchestrator | 2026-02-28 01:06:27.023543 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 
01:06:27.023555 | orchestrator | Saturday 28 February 2026 01:04:28 +0000 (0:00:03.861) 0:01:43.133 ***** 2026-02-28 01:06:27.023565 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:27.023576 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:27.023586 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:27.023597 | orchestrator | 2026-02-28 01:06:27.023606 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-28 01:06:27.023614 | orchestrator | Saturday 28 February 2026 01:04:28 +0000 (0:00:00.895) 0:01:44.028 ***** 2026-02-28 01:06:27.023624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:27.023641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:27.023657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023738 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:27.023748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:27.023766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:27.023783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023825 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:27.023832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-28 01:06:27.023843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-28 01:06:27.023850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:06:27.023883 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:27.023889 | orchestrator | 2026-02-28 01:06:27.023896 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-28 01:06:27.023902 | orchestrator | Saturday 28 February 2026 01:04:29 +0000 (0:00:00.931) 0:01:44.960 ***** 2026-02-28 01:06:27.023908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:27.023920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:27.023930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-28 01:06:27.023941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.023984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.024001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.024013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.024023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.024034 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.024045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.024056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.024069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.024081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.024087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:06:27.024093 | orchestrator | 2026-02-28 01:06:27.024099 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-28 01:06:27.024106 | orchestrator | Saturday 28 February 2026 01:04:35 
+0000 (0:00:05.756) 0:01:50.716 ***** 2026-02-28 01:06:27.024112 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:06:27.024118 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:06:27.024124 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:06:27.024130 | orchestrator | 2026-02-28 01:06:27.024136 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-02-28 01:06:27.024142 | orchestrator | Saturday 28 February 2026 01:04:36 +0000 (0:00:00.378) 0:01:51.095 ***** 2026-02-28 01:06:27.024148 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-28 01:06:27.024154 | orchestrator | 2026-02-28 01:06:27.024160 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-28 01:06:27.024167 | orchestrator | Saturday 28 February 2026 01:04:38 +0000 (0:00:02.394) 0:01:53.489 ***** 2026-02-28 01:06:27.024173 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-28 01:06:27.024179 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-28 01:06:27.024185 | orchestrator | 2026-02-28 01:06:27.024191 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-28 01:06:27.024197 | orchestrator | Saturday 28 February 2026 01:04:40 +0000 (0:00:02.549) 0:01:56.038 ***** 2026-02-28 01:06:27.024203 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:27.024209 | orchestrator | 2026-02-28 01:06:27.024215 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-28 01:06:27.024221 | orchestrator | Saturday 28 February 2026 01:04:59 +0000 (0:00:18.965) 0:02:15.004 ***** 2026-02-28 01:06:27.024227 | orchestrator | 2026-02-28 01:06:27.024233 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-28 01:06:27.024239 | orchestrator | Saturday 28 February 2026 01:05:00 
+0000 (0:00:00.106) 0:02:15.111 ***** 2026-02-28 01:06:27.024245 | orchestrator | 2026-02-28 01:06:27.024251 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-28 01:06:27.024257 | orchestrator | Saturday 28 February 2026 01:05:00 +0000 (0:00:00.086) 0:02:15.198 ***** 2026-02-28 01:06:27.024263 | orchestrator | 2026-02-28 01:06:27.024269 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-28 01:06:27.024275 | orchestrator | Saturday 28 February 2026 01:05:00 +0000 (0:00:00.142) 0:02:15.340 ***** 2026-02-28 01:06:27.024280 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:27.024293 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:27.024299 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:27.024305 | orchestrator | 2026-02-28 01:06:27.024311 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-28 01:06:27.024317 | orchestrator | Saturday 28 February 2026 01:05:14 +0000 (0:00:14.723) 0:02:30.064 ***** 2026-02-28 01:06:27.024328 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:27.024334 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:27.024340 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:27.024346 | orchestrator | 2026-02-28 01:06:27.024352 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-28 01:06:27.024359 | orchestrator | Saturday 28 February 2026 01:05:29 +0000 (0:00:14.776) 0:02:44.840 ***** 2026-02-28 01:06:27.024365 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:27.024371 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:27.024377 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:27.024383 | orchestrator | 2026-02-28 01:06:27.024389 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-28 
01:06:27.024396 | orchestrator | Saturday 28 February 2026 01:05:43 +0000 (0:00:13.859) 0:02:58.700 ***** 2026-02-28 01:06:27.024402 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:27.024408 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:27.024415 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:27.024420 | orchestrator | 2026-02-28 01:06:27.024426 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-28 01:06:27.024432 | orchestrator | Saturday 28 February 2026 01:05:53 +0000 (0:00:10.231) 0:03:08.931 ***** 2026-02-28 01:06:27.024438 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:27.024444 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:27.024451 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:27.024457 | orchestrator | 2026-02-28 01:06:27.024467 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-28 01:06:27.024473 | orchestrator | Saturday 28 February 2026 01:06:04 +0000 (0:00:10.418) 0:03:19.349 ***** 2026-02-28 01:06:27.024479 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:27.024486 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:06:27.024492 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:06:27.024499 | orchestrator | 2026-02-28 01:06:27.024505 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-28 01:06:27.024511 | orchestrator | Saturday 28 February 2026 01:06:13 +0000 (0:00:09.537) 0:03:28.887 ***** 2026-02-28 01:06:27.024517 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:06:27.024523 | orchestrator | 2026-02-28 01:06:27.024528 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:06:27.024536 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:06:27.024543 | 
orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:06:27.024550 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:06:27.024556 | orchestrator | 2026-02-28 01:06:27.024562 | orchestrator | 2026-02-28 01:06:27.024568 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:06:27.024574 | orchestrator | Saturday 28 February 2026 01:06:23 +0000 (0:00:09.556) 0:03:38.443 ***** 2026-02-28 01:06:27.024580 | orchestrator | =============================================================================== 2026-02-28 01:06:27.024586 | orchestrator | designate : Copying over designate.conf -------------------------------- 27.24s 2026-02-28 01:06:27.024592 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.97s 2026-02-28 01:06:27.024598 | orchestrator | designate : Restart designate-api container ---------------------------- 14.78s 2026-02-28 01:06:27.024610 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.72s 2026-02-28 01:06:27.024616 | orchestrator | designate : Restart designate-central container ------------------------ 13.86s 2026-02-28 01:06:27.024622 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.42s 2026-02-28 01:06:27.024628 | orchestrator | designate : Restart designate-producer container ----------------------- 10.23s 2026-02-28 01:06:27.024634 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 10.09s 2026-02-28 01:06:27.024640 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 9.56s 2026-02-28 01:06:27.024646 | orchestrator | designate : Restart designate-worker container -------------------------- 9.54s 2026-02-28 01:06:27.024651 | orchestrator | service-ks-register : 
designate | Creating endpoints -------------------- 7.09s 2026-02-28 01:06:27.024657 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.94s 2026-02-28 01:06:27.024663 | orchestrator | designate : Copying over config.json files for services ----------------- 6.77s 2026-02-28 01:06:27.024669 | orchestrator | designate : Copying over named.conf ------------------------------------- 6.13s 2026-02-28 01:06:27.024675 | orchestrator | designate : Check designate containers ---------------------------------- 5.76s 2026-02-28 01:06:27.024681 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.48s 2026-02-28 01:06:27.024687 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.29s 2026-02-28 01:06:27.024693 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.22s 2026-02-28 01:06:27.024718 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.20s 2026-02-28 01:06:27.024728 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.86s 2026-02-28 01:06:27.024787 | orchestrator | 2026-02-28 01:06:27 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:27.024796 | orchestrator | 2026-02-28 01:06:27 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:27.024802 | orchestrator | 2026-02-28 01:06:27 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:27.024808 | orchestrator | 2026-02-28 01:06:27 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:27.024814 | orchestrator | 2026-02-28 01:06:27 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:30.073394 | orchestrator | 2026-02-28 01:06:30 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:30.073489 | 
orchestrator | 2026-02-28 01:06:30 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:30.073499 | orchestrator | 2026-02-28 01:06:30 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:30.074235 | orchestrator | 2026-02-28 01:06:30 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:30.074258 | orchestrator | 2026-02-28 01:06:30 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:33.126226 | orchestrator | 2026-02-28 01:06:33 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:33.126901 | orchestrator | 2026-02-28 01:06:33 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:33.127901 | orchestrator | 2026-02-28 01:06:33 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:33.128999 | orchestrator | 2026-02-28 01:06:33 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:33.129027 | orchestrator | 2026-02-28 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:36.169370 | orchestrator | 2026-02-28 01:06:36 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:36.174196 | orchestrator | 2026-02-28 01:06:36 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:36.175389 | orchestrator | 2026-02-28 01:06:36 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:36.176414 | orchestrator | 2026-02-28 01:06:36 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:36.176465 | orchestrator | 2026-02-28 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:39.209027 | orchestrator | 2026-02-28 01:06:39 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:39.210331 | orchestrator | 2026-02-28 
01:06:39 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:39.212021 | orchestrator | 2026-02-28 01:06:39 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:39.213161 | orchestrator | 2026-02-28 01:06:39 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:39.213330 | orchestrator | 2026-02-28 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:42.252806 | orchestrator | 2026-02-28 01:06:42 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:42.253199 | orchestrator | 2026-02-28 01:06:42 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:42.254225 | orchestrator | 2026-02-28 01:06:42 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:42.255240 | orchestrator | 2026-02-28 01:06:42 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:42.255274 | orchestrator | 2026-02-28 01:06:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:45.313157 | orchestrator | 2026-02-28 01:06:45 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:45.314499 | orchestrator | 2026-02-28 01:06:45 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:45.316973 | orchestrator | 2026-02-28 01:06:45 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:45.319271 | orchestrator | 2026-02-28 01:06:45 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:45.319313 | orchestrator | 2026-02-28 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:48.375515 | orchestrator | 2026-02-28 01:06:48 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:48.376951 | orchestrator | 2026-02-28 01:06:48 | INFO  | Task 
74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:48.380279 | orchestrator | 2026-02-28 01:06:48 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:48.381411 | orchestrator | 2026-02-28 01:06:48 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:48.381447 | orchestrator | 2026-02-28 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:51.849226 | orchestrator | 2026-02-28 01:06:51 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:51.849316 | orchestrator | 2026-02-28 01:06:51 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:51.849326 | orchestrator | 2026-02-28 01:06:51 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:51.849333 | orchestrator | 2026-02-28 01:06:51 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:51.849362 | orchestrator | 2026-02-28 01:06:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:54.847649 | orchestrator | 2026-02-28 01:06:54 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:54.848719 | orchestrator | 2026-02-28 01:06:54 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:54.850177 | orchestrator | 2026-02-28 01:06:54 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:54.851607 | orchestrator | 2026-02-28 01:06:54 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:54.851649 | orchestrator | 2026-02-28 01:06:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:06:57.894092 | orchestrator | 2026-02-28 01:06:57 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:06:57.894988 | orchestrator | 2026-02-28 01:06:57 | INFO  | Task 
74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:06:57.896003 | orchestrator | 2026-02-28 01:06:57 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:06:57.897098 | orchestrator | 2026-02-28 01:06:57 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:06:57.897174 | orchestrator | 2026-02-28 01:06:57 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:00.944578 | orchestrator | 2026-02-28 01:07:00 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:07:00.945402 | orchestrator | 2026-02-28 01:07:00 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:07:00.947281 | orchestrator | 2026-02-28 01:07:00 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:07:00.950309 | orchestrator | 2026-02-28 01:07:00 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:07:00.950409 | orchestrator | 2026-02-28 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:03.991302 | orchestrator | 2026-02-28 01:07:03 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:07:03.991950 | orchestrator | 2026-02-28 01:07:03 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:07:03.995281 | orchestrator | 2026-02-28 01:07:03 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:07:03.996559 | orchestrator | 2026-02-28 01:07:03 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:07:03.996692 | orchestrator | 2026-02-28 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:07.050964 | orchestrator | 2026-02-28 01:07:07 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:07:07.051628 | orchestrator | 2026-02-28 01:07:07 | INFO  | Task 
74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:07:07.052134 | orchestrator | 2026-02-28 01:07:07 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:07:07.053320 | orchestrator | 2026-02-28 01:07:07 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:07:07.053370 | orchestrator | 2026-02-28 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:10.102452 | orchestrator | 2026-02-28 01:07:10 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:07:10.103422 | orchestrator | 2026-02-28 01:07:10 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:07:10.104420 | orchestrator | 2026-02-28 01:07:10 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:07:10.105679 | orchestrator | 2026-02-28 01:07:10 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:07:10.105847 | orchestrator | 2026-02-28 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:13.149324 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:07:13.150893 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:07:13.153172 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:07:13.155926 | orchestrator | 2026-02-28 01:07:13 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED 2026-02-28 01:07:13.157327 | orchestrator | 2026-02-28 01:07:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:16.207415 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED 2026-02-28 01:07:16.209226 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task 
74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:16.210788 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:16.212643 | orchestrator | 2026-02-28 01:07:16 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED
2026-02-28 01:07:16.212732 | orchestrator | 2026-02-28 01:07:16 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:19.255893 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state STARTED
2026-02-28 01:07:19.257208 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:19.259820 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:19.261422 | orchestrator | 2026-02-28 01:07:19 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED
2026-02-28 01:07:19.261458 | orchestrator | 2026-02-28 01:07:19 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:22.358459 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task 880afd3b-6ad9-4137-9c29-aac31933f1fe is in state SUCCESS
2026-02-28 01:07:22.358553 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:22.358567 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:07:22.358578 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:22.358588 | orchestrator | 2026-02-28 01:07:22 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED
2026-02-28 01:07:22.358598 | orchestrator | 2026-02-28 01:07:22 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:25.389928 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:25.397807 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:07:25.400118 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:25.401140 | orchestrator | 2026-02-28 01:07:25 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED
2026-02-28 01:07:25.401174 | orchestrator | 2026-02-28 01:07:25 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:28.445269 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:28.462422 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:07:28.462519 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:28.462533 | orchestrator | 2026-02-28 01:07:28 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED
2026-02-28 01:07:28.462545 | orchestrator | 2026-02-28 01:07:28 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:31.488776 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:31.489417 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:07:31.490195 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:31.491390 | orchestrator | 2026-02-28 01:07:31 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED
2026-02-28 01:07:31.491437 | orchestrator | 2026-02-28 01:07:31 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:34.533347 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:34.534257 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:07:34.535464 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:34.536280 | orchestrator | 2026-02-28 01:07:34 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED
2026-02-28 01:07:34.536844 | orchestrator | 2026-02-28 01:07:34 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:37.584301 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:37.587198 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:07:37.590819 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:37.593082 | orchestrator | 2026-02-28 01:07:37 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED
2026-02-28 01:07:37.594468 | orchestrator | 2026-02-28 01:07:37 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:40.628539 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:40.629421 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:07:40.629818 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:40.630541 | orchestrator | 2026-02-28 01:07:40 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED
2026-02-28 01:07:40.630603 | orchestrator | 2026-02-28 01:07:40 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:43.662115 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task 
74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:43.663105 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:07:43.664445 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:43.666532 | orchestrator | 2026-02-28 01:07:43 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state STARTED
2026-02-28 01:07:43.666575 | orchestrator | 2026-02-28 01:07:43 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:07:46.735655 | orchestrator |
2026-02-28 01:07:46.735744 | orchestrator |
2026-02-28 01:07:46.735754 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:07:46.735762 | orchestrator |
2026-02-28 01:07:46.735768 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:07:46.735775 | orchestrator | Saturday 28 February 2026 01:06:40 +0000 (0:00:00.341) 0:00:00.341 *****
2026-02-28 01:07:46.735780 | orchestrator | ok: [testbed-manager]
2026-02-28 01:07:46.735785 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:07:46.735789 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:07:46.735793 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:07:46.735796 | orchestrator | ok: [testbed-node-3]
2026-02-28 01:07:46.735800 | orchestrator | ok: [testbed-node-4]
2026-02-28 01:07:46.735804 | orchestrator | ok: [testbed-node-5]
2026-02-28 01:07:46.735808 | orchestrator |
2026-02-28 01:07:46.735812 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:07:46.735815 | orchestrator | Saturday 28 February 2026 01:06:42 +0000 (0:00:01.015) 0:00:01.357 *****
2026-02-28 01:07:46.735819 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-02-28 01:07:46.735823 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-02-28 01:07:46.735827 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-02-28 01:07:46.735832 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-28 01:07:46.735835 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-28 01:07:46.735839 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-28 01:07:46.735843 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-28 01:07:46.735847 | orchestrator |
2026-02-28 01:07:46.735850 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-28 01:07:46.735854 | orchestrator |
2026-02-28 01:07:46.735858 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-28 01:07:46.735862 | orchestrator | Saturday 28 February 2026 01:06:42 +0000 (0:00:00.865) 0:00:02.222 *****
2026-02-28 01:07:46.735866 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 01:07:46.735871 | orchestrator |
2026-02-28 01:07:46.735874 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-28 01:07:46.735878 | orchestrator | Saturday 28 February 2026 01:06:44 +0000 (0:00:01.875) 0:00:04.098 *****
2026-02-28 01:07:46.735882 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-28 01:07:46.735886 | orchestrator |
2026-02-28 01:07:46.735890 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-02-28 01:07:46.735893 | orchestrator | Saturday 28 February 2026 01:06:49 +0000 (0:00:04.458) 0:00:08.557 *****
2026-02-28 01:07:46.735897 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-02-28 01:07:46.735902 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-02-28 01:07:46.735906 | orchestrator |
2026-02-28 01:07:46.735910 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-02-28 01:07:46.735914 | orchestrator | Saturday 28 February 2026 01:06:58 +0000 (0:00:08.808) 0:00:17.366 *****
2026-02-28 01:07:46.735918 | orchestrator | ok: [testbed-manager] => (item=service)
2026-02-28 01:07:46.735921 | orchestrator |
2026-02-28 01:07:46.735925 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-02-28 01:07:46.735937 | orchestrator | Saturday 28 February 2026 01:07:02 +0000 (0:00:04.192) 0:00:21.558 *****
2026-02-28 01:07:46.735953 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-02-28 01:07:46.735957 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-28 01:07:46.735961 | orchestrator |
2026-02-28 01:07:46.735965 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-02-28 01:07:46.735968 | orchestrator | Saturday 28 February 2026 01:07:06 +0000 (0:00:04.441) 0:00:26.000 *****
2026-02-28 01:07:46.735972 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-02-28 01:07:46.735976 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-02-28 01:07:46.735980 | orchestrator |
2026-02-28 01:07:46.735984 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-02-28 01:07:46.735987 | orchestrator | Saturday 28 February 2026 01:07:14 +0000 (0:00:07.720) 0:00:33.720 *****
2026-02-28 01:07:46.735991 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-28 01:07:46.735995 | orchestrator |
2026-02-28 01:07:46.735999 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:07:46.736003 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:07:46.736007 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:07:46.736011 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:07:46.736015 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:07:46.736018 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:07:46.736037 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:07:46.736042 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:07:46.736046 | orchestrator |
2026-02-28 01:07:46.736054 | orchestrator |
2026-02-28 01:07:46.736058 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:07:46.736062 | orchestrator | Saturday 28 February 2026 01:07:19 +0000 (0:00:05.603) 0:00:39.324 *****
2026-02-28 01:07:46.736065 | orchestrator | ===============================================================================
2026-02-28 01:07:46.736069 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 8.81s
2026-02-28 01:07:46.736073 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.72s
2026-02-28 01:07:46.736077 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.60s
2026-02-28 01:07:46.736081 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.46s
2026-02-28 01:07:46.736084 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.44s
2026-02-28 01:07:46.736088 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.19s
2026-02-28 01:07:46.736092 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.88s
2026-02-28 01:07:46.736096 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.02s
2026-02-28 01:07:46.736099 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s
2026-02-28 01:07:46.736103 | orchestrator |
2026-02-28 01:07:46.736107 | orchestrator | 2026-02-28 01:07:46 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:07:46.736111 | orchestrator | 2026-02-28 01:07:46 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:07:46.736115 | orchestrator | 2026-02-28 01:07:46 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED
2026-02-28 01:07:46.736122 | orchestrator | 2026-02-28 01:07:46 | INFO  | Task 019147cd-6327-4726-9b3b-bb353e6604b9 is in state SUCCESS
2026-02-28 01:07:46.736492 | orchestrator |
2026-02-28 01:07:46.736506 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:07:46.736513 | orchestrator |
2026-02-28 01:07:46.736519 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:07:46.736526 | orchestrator | Saturday 28 February 2026 01:05:13 +0000 (0:00:00.441) 0:00:00.441 *****
2026-02-28 01:07:46.736532 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:07:46.736537 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:07:46.736541 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:07:46.736546 | orchestrator |
2026-02-28 01:07:46.736550 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:07:46.736554 | orchestrator | Saturday 28 February 2026 01:05:14 +0000 (0:00:00.799) 0:00:01.240 *****
2026-02-28 01:07:46.736559 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-02-28 01:07:46.736564 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-02-28 01:07:46.736568 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-02-28 01:07:46.736572 | orchestrator |
2026-02-28 01:07:46.736577 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-02-28 01:07:46.736581 | orchestrator |
2026-02-28 01:07:46.736586 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-28 01:07:46.736594 | orchestrator | Saturday 28 February 2026 01:05:15 +0000 (0:00:01.159) 0:00:02.400 *****
2026-02-28 01:07:46.736599 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:07:46.736603 | orchestrator |
2026-02-28 01:07:46.736607 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-02-28 01:07:46.736610 | orchestrator | Saturday 28 February 2026 01:05:16 +0000 (0:00:01.094) 0:00:03.494 *****
2026-02-28 01:07:46.736614 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-02-28 01:07:46.736618 | orchestrator |
2026-02-28 01:07:46.736622 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-02-28 01:07:46.736626 | orchestrator | Saturday 28 February 2026 01:05:20 +0000 (0:00:04.065) 0:00:07.560 *****
2026-02-28 01:07:46.736629 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-02-28 01:07:46.736633 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-02-28 01:07:46.736637 | orchestrator |
2026-02-28 01:07:46.736641 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-02-28 01:07:46.736645 | orchestrator | Saturday 28 February 2026 01:05:28 +0000 (0:00:07.150) 0:00:14.710 *****
2026-02-28 01:07:46.736650 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-28 01:07:46.736656 | orchestrator |
2026-02-28 01:07:46.736663 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-02-28 01:07:46.736670 | orchestrator | Saturday 28 February 2026 01:05:31 +0000 (0:00:03.796) 0:00:18.507 *****
2026-02-28 01:07:46.736677 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-02-28 01:07:46.736684 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-28 01:07:46.736690 | orchestrator |
2026-02-28 01:07:46.736722 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-02-28 01:07:46.736728 | orchestrator | Saturday 28 February 2026 01:05:36 +0000 (0:00:04.436) 0:00:22.944 *****
2026-02-28 01:07:46.736735 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-28 01:07:46.736742 | orchestrator |
2026-02-28 01:07:46.736748 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-02-28 01:07:46.736754 | orchestrator | Saturday 28 February 2026 01:05:40 +0000 (0:00:03.809) 0:00:26.753 *****
2026-02-28 01:07:46.736767 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-02-28 01:07:46.736774 | orchestrator |
2026-02-28 01:07:46.736780 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-02-28 01:07:46.736785 | orchestrator | Saturday 28 February 2026 01:05:45 +0000 (0:00:04.986) 0:00:31.740 *****
2026-02-28 01:07:46.736794 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:07:46.736802 | 
orchestrator |
2026-02-28 01:07:46.736808 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-02-28 01:07:46.736814 | orchestrator | Saturday 28 February 2026 01:05:49 +0000 (0:00:04.074) 0:00:35.815 *****
2026-02-28 01:07:46.736820 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:07:46.736827 | orchestrator |
2026-02-28 01:07:46.736833 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-02-28 01:07:46.736839 | orchestrator | Saturday 28 February 2026 01:05:53 +0000 (0:00:04.681) 0:00:40.496 *****
2026-02-28 01:07:46.736845 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:07:46.736851 | orchestrator |
2026-02-28 01:07:46.736857 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-02-28 01:07:46.736863 | orchestrator | Saturday 28 February 2026 01:05:58 +0000 (0:00:04.585) 0:00:45.082 *****
2026-02-28 01:07:46.736879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 01:07:46.736914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 01:07:46.736929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 01:07:46.736936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:07:46.736948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:07:46.736999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:07:46.737009 | orchestrator |
2026-02-28 01:07:46.737016 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-02-28 01:07:46.737021 | orchestrator | Saturday 28 February 2026 01:06:00 +0000 (0:00:02.139) 0:00:47.221 *****
2026-02-28 01:07:46.737027 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:07:46.737034 | orchestrator |
2026-02-28 01:07:46.737040 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-02-28 01:07:46.737046 | orchestrator | Saturday 28 February 2026 01:06:00 +0000 (0:00:00.237) 0:00:47.459 *****
2026-02-28 01:07:46.737052 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:07:46.737059 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:07:46.737065 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:07:46.737072 | orchestrator |
2026-02-28 01:07:46.737079 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-02-28 01:07:46.737086 | orchestrator | Saturday 28 February 2026 01:06:02 +0000 (0:00:01.228) 0:00:48.687 *****
2026-02-28 01:07:46.737092 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 01:07:46.737097 | orchestrator |
2026-02-28 01:07:46.737104 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-02-28 01:07:46.737114 | orchestrator | Saturday 28 February 2026 01:06:03 +0000 (0:00:01.153) 0:00:49.840 *****
2026-02-28 01:07:46.737122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 01:07:46.737135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 01:07:46.737139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 01:07:46.737149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:07:46.737176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:07:46.737181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:07:46.737188 | orchestrator |
2026-02-28 01:07:46.737194 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-02-28 01:07:46.737198 | orchestrator | Saturday 28 February 2026 01:06:07 +0000 (0:00:04.131) 0:00:53.971 *****
2026-02-28 01:07:46.737202 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:07:46.737206 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:07:46.737210 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:07:46.737214 | orchestrator |
2026-02-28 01:07:46.737218 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-28 01:07:46.737221 | orchestrator | Saturday 28 February 2026 01:06:08 +0000 (0:00:00.796) 0:00:54.768 *****
2026-02-28 01:07:46.737225 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:07:46.737229 | orchestrator |
2026-02-28 01:07:46.737233 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-02-28 01:07:46.737237 | orchestrator | Saturday 28 February 2026 01:06:09 +0000 (0:00:01.791) 0:00:56.559 *****
2026-02-28 01:07:46.737241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 01:07:46.737248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 01:07:46.737254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 01:07:46.737261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:07:46.737265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:07:46.737269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:07:46.737273 | orchestrator |
2026-02-28 01:07:46.737277 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-02-28 01:07:46.737281 | orchestrator | Saturday 28 February 2026 01:06:12 +0000 (0:00:02.905) 0:00:59.464 *****
2026-02-28 01:07:46.737288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-28 01:07:46.737295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-28 01:07:46.737301 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:07:46.737305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:07:46.737310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:07:46.737314 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:46.737318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:07:46.737325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:07:46.737329 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:46.737332 | orchestrator | 2026-02-28 01:07:46.737336 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-28 01:07:46.737342 | orchestrator | Saturday 28 February 2026 01:06:13 +0000 (0:00:00.719) 0:01:00.184 ***** 2026-02-28 01:07:46.737348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:07:46.737352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:07:46.737356 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:46.737360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:07:46.737364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:07:46.737368 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:46.737376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:07:46.737390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:07:46.737398 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:46.737402 | orchestrator | 2026-02-28 01:07:46.737406 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-28 01:07:46.737409 | orchestrator | Saturday 28 February 2026 01:06:17 +0000 (0:00:03.980) 0:01:04.164 ***** 2026-02-28 01:07:46.737413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:07:46.737418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:07:46.737424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 2026-02-28 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:46.737523 | orchestrator | 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:07:46.737532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:46.737536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:46.737540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:46.737544 | orchestrator | 2026-02-28 01:07:46.737548 | orchestrator | TASK [magnum : Copying over magnum.conf] 
*************************************** 2026-02-28 01:07:46.737552 | orchestrator | Saturday 28 February 2026 01:06:22 +0000 (0:00:04.472) 0:01:08.637 ***** 2026-02-28 01:07:46.737555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:07:46.737562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2026-02-28 01:07:46.737571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:07:46.737575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:46.737579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:46.737583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:46.737587 | orchestrator | 2026-02-28 01:07:46.737591 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-28 01:07:46.737595 | orchestrator | Saturday 28 February 2026 01:06:37 +0000 (0:00:15.594) 0:01:24.231 ***** 2026-02-28 01:07:46.737605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:07:46.737612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:07:46.737616 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:46.737620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:07:46.737624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-28 01:07:46.737628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 
01:07:46.737637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:07:46.737641 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:46.737645 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:46.737649 | orchestrator | 2026-02-28 01:07:46.737653 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-28 01:07:46.737657 | orchestrator | Saturday 28 February 2026 01:06:39 +0000 (0:00:01.666) 0:01:25.898 ***** 2026-02-28 01:07:46.737663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:07:46.737667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 01:07:46.737671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-28 
01:07:46.737677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:46.737683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:46.737690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:07:46.737707 | orchestrator | 2026-02-28 01:07:46.737714 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-28 01:07:46.737720 | orchestrator | Saturday 28 February 2026 01:06:42 +0000 (0:00:03.326) 0:01:29.225 ***** 2026-02-28 01:07:46.737727 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:07:46.737734 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:07:46.737740 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:07:46.737745 | orchestrator | 2026-02-28 01:07:46.737749 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-28 01:07:46.737753 | orchestrator | Saturday 28 February 2026 01:06:42 +0000 (0:00:00.346) 0:01:29.572 ***** 2026-02-28 01:07:46.737757 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:46.737760 | orchestrator | 2026-02-28 01:07:46.737764 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-28 01:07:46.737768 | orchestrator | Saturday 28 February 2026 01:06:45 +0000 (0:00:02.377) 0:01:31.950 ***** 2026-02-28 01:07:46.737772 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:46.737775 | orchestrator | 2026-02-28 01:07:46.737779 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-28 01:07:46.737783 | orchestrator | Saturday 28 February 2026 01:06:47 +0000 (0:00:02.569) 0:01:34.519 ***** 2026-02-28 01:07:46.737787 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:46.737791 | orchestrator | 2026-02-28 01:07:46.737794 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-28 01:07:46.737798 | orchestrator | Saturday 28 February 2026 01:07:06 +0000 (0:00:19.033) 0:01:53.552 ***** 2026-02-28 01:07:46.737807 | orchestrator | 2026-02-28 01:07:46.737811 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-28 01:07:46.737815 | orchestrator | Saturday 28 February 2026 01:07:07 +0000 (0:00:00.098) 0:01:53.651 ***** 2026-02-28 01:07:46.737819 | orchestrator | 2026-02-28 01:07:46.737822 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-28 01:07:46.737826 | orchestrator | Saturday 28 February 2026 01:07:07 +0000 (0:00:00.081) 0:01:53.732 ***** 2026-02-28 01:07:46.737830 | orchestrator | 2026-02-28 01:07:46.737834 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-28 01:07:46.737838 | orchestrator | Saturday 28 February 2026 01:07:07 +0000 (0:00:00.082) 0:01:53.814 ***** 2026-02-28 01:07:46.737841 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:46.737845 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:46.737849 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:46.737853 | orchestrator | 2026-02-28 01:07:46.737860 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-28 01:07:46.737864 | orchestrator | Saturday 28 February 2026 01:07:26 +0000 (0:00:19.698) 0:02:13.512 ***** 2026-02-28 01:07:46.737868 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:07:46.737872 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:07:46.737876 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:07:46.737881 | orchestrator | 2026-02-28 01:07:46.737887 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:07:46.737894 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 
failed=0 skipped=6  rescued=0 ignored=0 2026-02-28 01:07:46.737901 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 01:07:46.737908 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 01:07:46.737912 | orchestrator | 2026-02-28 01:07:46.737916 | orchestrator | 2026-02-28 01:07:46.737920 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:07:46.737924 | orchestrator | Saturday 28 February 2026 01:07:45 +0000 (0:00:18.793) 0:02:32.306 ***** 2026-02-28 01:07:46.737927 | orchestrator | =============================================================================== 2026-02-28 01:07:46.737931 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.70s 2026-02-28 01:07:46.737938 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 19.03s 2026-02-28 01:07:46.737942 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 18.79s 2026-02-28 01:07:46.737946 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 15.59s 2026-02-28 01:07:46.737950 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.15s 2026-02-28 01:07:46.737954 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.99s 2026-02-28 01:07:46.737957 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.68s 2026-02-28 01:07:46.737961 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.59s 2026-02-28 01:07:46.737965 | orchestrator | magnum : Copying over config.json files for services -------------------- 4.47s 2026-02-28 01:07:46.737969 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.44s 
2026-02-28 01:07:46.737973 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 4.13s 2026-02-28 01:07:46.737979 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 4.07s 2026-02-28 01:07:46.737983 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.07s 2026-02-28 01:07:46.737987 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 3.98s 2026-02-28 01:07:46.737991 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.81s 2026-02-28 01:07:46.738003 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.80s 2026-02-28 01:07:46.738009 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.33s 2026-02-28 01:07:46.738062 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.91s 2026-02-28 01:07:46.738066 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.57s 2026-02-28 01:07:46.738070 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.38s 2026-02-28 01:07:49.754450 | orchestrator | 2026-02-28 01:07:49 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:07:49.755478 | orchestrator | 2026-02-28 01:07:49 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:07:49.757124 | orchestrator | 2026-02-28 01:07:49 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:07:49.758191 | orchestrator | 2026-02-28 01:07:49 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:07:49.758236 | orchestrator | 2026-02-28 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:07:52.793763 | orchestrator | 2026-02-28 01:07:52 | INFO  | Task 
b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:07:52.795546 | orchestrator | 2026-02-28 01:07:52 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:07:52.797096 | orchestrator | 2026-02-28 01:07:52 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:07:52.798595 | orchestrator | 2026-02-28 01:07:52 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:07:52.798641 | orchestrator | 2026-02-28 01:07:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:20.213467 | orchestrator | 2026-02-28 01:08:20 | INFO  | Task
b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:20.213561 | orchestrator | 2026-02-28 01:08:20 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:20.213576 | orchestrator | 2026-02-28 01:08:20 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:20.213616 | orchestrator | 2026-02-28 01:08:20 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state STARTED 2026-02-28 01:08:20.213627 | orchestrator | 2026-02-28 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:23.278911 | orchestrator | 2026-02-28 01:08:23 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:23.278995 | orchestrator | 2026-02-28 01:08:23 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:23.279010 | orchestrator | 2026-02-28 01:08:23 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:23.280951 | orchestrator | 2026-02-28 01:08:23 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:23.283507 | orchestrator | 2026-02-28 01:08:23.283548 | orchestrator | 2026-02-28 01:08:23 | INFO  | Task 34b2796b-2316-4441-9c8c-9421c9d47620 is in state SUCCESS 2026-02-28 01:08:23.285239 | orchestrator | 2026-02-28 01:08:23.285352 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:08:23.285400 | orchestrator | 2026-02-28 01:08:23.285411 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:08:23.285421 | orchestrator | Saturday 28 February 2026 01:02:45 +0000 (0:00:00.463) 0:00:00.463 ***** 2026-02-28 01:08:23.285431 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:08:23.285442 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:08:23.285452 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:08:23.285461 | orchestrator | ok: 
[testbed-node-3] 2026-02-28 01:08:23.285471 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:08:23.285481 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:08:23.285491 | orchestrator | 2026-02-28 01:08:23.285501 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:08:23.285525 | orchestrator | Saturday 28 February 2026 01:02:46 +0000 (0:00:01.095) 0:00:01.559 ***** 2026-02-28 01:08:23.285543 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-28 01:08:23.285561 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-28 01:08:23.285577 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-28 01:08:23.285594 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-28 01:08:23.285610 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-28 01:08:23.285625 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-28 01:08:23.285640 | orchestrator | 2026-02-28 01:08:23.285655 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-28 01:08:23.285668 | orchestrator | 2026-02-28 01:08:23.285684 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-28 01:08:23.285719 | orchestrator | Saturday 28 February 2026 01:02:47 +0000 (0:00:01.069) 0:00:02.628 ***** 2026-02-28 01:08:23.285738 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:08:23.285756 | orchestrator | 2026-02-28 01:08:23.285774 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-28 01:08:23.285792 | orchestrator | Saturday 28 February 2026 01:02:49 +0000 (0:00:01.598) 0:00:04.227 ***** 2026-02-28 01:08:23.285809 | orchestrator | ok: 
[testbed-node-1] 2026-02-28 01:08:23.285827 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:08:23.285845 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:08:23.285863 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:08:23.285880 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:08:23.285896 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:08:23.285910 | orchestrator | 2026-02-28 01:08:23.285925 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-28 01:08:23.285939 | orchestrator | Saturday 28 February 2026 01:02:50 +0000 (0:00:01.585) 0:00:05.813 ***** 2026-02-28 01:08:23.285979 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:08:23.285994 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:08:23.286011 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:08:23.286091 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:08:23.286110 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:08:23.286126 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:08:23.286139 | orchestrator | 2026-02-28 01:08:23.286150 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-28 01:08:23.286159 | orchestrator | Saturday 28 February 2026 01:02:52 +0000 (0:00:01.311) 0:00:07.124 ***** 2026-02-28 01:08:23.286169 | orchestrator | ok: [testbed-node-0] => { 2026-02-28 01:08:23.286179 | orchestrator |  "changed": false, 2026-02-28 01:08:23.286189 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:08:23.286199 | orchestrator | } 2026-02-28 01:08:23.286209 | orchestrator | ok: [testbed-node-1] => { 2026-02-28 01:08:23.286219 | orchestrator |  "changed": false, 2026-02-28 01:08:23.286229 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:08:23.286239 | orchestrator | } 2026-02-28 01:08:23.286249 | orchestrator | ok: [testbed-node-2] => { 2026-02-28 01:08:23.286258 | orchestrator |  "changed": false, 2026-02-28 01:08:23.286268 | orchestrator |  "msg": 
"All assertions passed" 2026-02-28 01:08:23.286278 | orchestrator | } 2026-02-28 01:08:23.286287 | orchestrator | ok: [testbed-node-3] => { 2026-02-28 01:08:23.286297 | orchestrator |  "changed": false, 2026-02-28 01:08:23.286307 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:08:23.286316 | orchestrator | } 2026-02-28 01:08:23.286326 | orchestrator | ok: [testbed-node-4] => { 2026-02-28 01:08:23.286335 | orchestrator |  "changed": false, 2026-02-28 01:08:23.286345 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:08:23.286355 | orchestrator | } 2026-02-28 01:08:23.286364 | orchestrator | ok: [testbed-node-5] => { 2026-02-28 01:08:23.286374 | orchestrator |  "changed": false, 2026-02-28 01:08:23.286384 | orchestrator |  "msg": "All assertions passed" 2026-02-28 01:08:23.286393 | orchestrator | } 2026-02-28 01:08:23.286403 | orchestrator | 2026-02-28 01:08:23.286413 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-28 01:08:23.286422 | orchestrator | Saturday 28 February 2026 01:02:53 +0000 (0:00:01.073) 0:00:08.198 ***** 2026-02-28 01:08:23.286432 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.286442 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.286451 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:23.286461 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.286471 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.286480 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.286490 | orchestrator | 2026-02-28 01:08:23.286500 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-28 01:08:23.286510 | orchestrator | Saturday 28 February 2026 01:02:53 +0000 (0:00:00.755) 0:00:08.953 ***** 2026-02-28 01:08:23.286519 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-28 01:08:23.286529 | orchestrator | 2026-02-28 
01:08:23.286539 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-28 01:08:23.286549 | orchestrator | Saturday 28 February 2026 01:02:57 +0000 (0:00:03.686) 0:00:12.640 ***** 2026-02-28 01:08:23.286559 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-28 01:08:23.286569 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-28 01:08:23.286579 | orchestrator | 2026-02-28 01:08:23.286611 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-28 01:08:23.286621 | orchestrator | Saturday 28 February 2026 01:03:04 +0000 (0:00:06.761) 0:00:19.401 ***** 2026-02-28 01:08:23.286631 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:08:23.286641 | orchestrator | 2026-02-28 01:08:23.286650 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-28 01:08:23.286670 | orchestrator | Saturday 28 February 2026 01:03:07 +0000 (0:00:03.479) 0:00:22.881 ***** 2026-02-28 01:08:23.286680 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-02-28 01:08:23.286711 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:08:23.286722 | orchestrator | 2026-02-28 01:08:23.286732 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-28 01:08:23.286741 | orchestrator | Saturday 28 February 2026 01:03:12 +0000 (0:00:04.505) 0:00:27.386 ***** 2026-02-28 01:08:23.286760 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:08:23.286778 | orchestrator | 2026-02-28 01:08:23.286794 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-28 01:08:23.286811 | orchestrator | Saturday 28 February 2026 01:03:16 +0000 (0:00:03.790) 
0:00:31.177 ***** 2026-02-28 01:08:23.286827 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-28 01:08:23.286875 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-28 01:08:23.286896 | orchestrator | 2026-02-28 01:08:23.286914 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-28 01:08:23.286931 | orchestrator | Saturday 28 February 2026 01:03:24 +0000 (0:00:08.809) 0:00:39.986 ***** 2026-02-28 01:08:23.286948 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.286958 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.286968 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:23.286977 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.286987 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.286997 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.287006 | orchestrator | 2026-02-28 01:08:23.287016 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-28 01:08:23.287026 | orchestrator | Saturday 28 February 2026 01:03:25 +0000 (0:00:00.939) 0:00:40.925 ***** 2026-02-28 01:08:23.287036 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:23.287046 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.287055 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.287065 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.287075 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.287085 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.287094 | orchestrator | 2026-02-28 01:08:23.287104 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-28 01:08:23.287114 | orchestrator | Saturday 28 February 2026 01:03:28 +0000 (0:00:02.562) 0:00:43.487 ***** 2026-02-28 01:08:23.287124 | orchestrator | ok: 
[testbed-node-0] 2026-02-28 01:08:23.287134 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:08:23.287143 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:08:23.287153 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:08:23.287163 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:08:23.287172 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:08:23.287182 | orchestrator | 2026-02-28 01:08:23.287192 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-28 01:08:23.287202 | orchestrator | Saturday 28 February 2026 01:03:30 +0000 (0:00:01.582) 0:00:45.069 ***** 2026-02-28 01:08:23.287212 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.287222 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.287232 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:23.287241 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.287251 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.287261 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.287271 | orchestrator | 2026-02-28 01:08:23.287281 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-28 01:08:23.287290 | orchestrator | Saturday 28 February 2026 01:03:33 +0000 (0:00:03.012) 0:00:48.082 ***** 2026-02-28 01:08:23.287303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.287338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.287356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.287367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.287377 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.287394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.287405 | orchestrator | 2026-02-28 01:08:23.287415 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-28 01:08:23.287425 | orchestrator | Saturday 28 February 2026 01:03:36 +0000 (0:00:03.779) 0:00:51.861 ***** 2026-02-28 01:08:23.287435 | orchestrator | [WARNING]: Skipped 2026-02-28 01:08:23.287447 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-28 01:08:23.287457 | orchestrator | due to this access issue: 2026-02-28 01:08:23.287467 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-28 01:08:23.287477 | orchestrator | a directory 2026-02-28 01:08:23.287487 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:08:23.287497 | orchestrator | 2026-02-28 01:08:23.287508 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-28 01:08:23.287523 | orchestrator | Saturday 28 February 2026 01:03:37 +0000 (0:00:01.096) 0:00:52.958 ***** 2026-02-28 01:08:23.287534 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:08:23.287545 | orchestrator | 2026-02-28 01:08:23.287555 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-28 01:08:23.287564 | orchestrator | Saturday 28 February 2026 01:03:39 
+0000 (0:00:01.786) 0:00:54.744 ***** 2026-02-28 01:08:23.287587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.287598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.287609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.287625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.287642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.287657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.287667 | orchestrator | 2026-02-28 01:08:23.287677 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-28 01:08:23.287753 | orchestrator | Saturday 28 February 2026 01:03:44 +0000 (0:00:04.905) 0:00:59.650 ***** 2026-02-28 01:08:23.287776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.287807 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.287828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.287845 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.287861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.287871 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.287895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.287911 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:23.287928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.287945 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.287959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.287982 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.287996 | orchestrator | 2026-02-28 01:08:23.288009 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-28 01:08:23.288024 | orchestrator | Saturday 28 February 2026 01:03:50 +0000 (0:00:06.074) 0:01:05.725 ***** 2026-02-28 01:08:23.288039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.288055 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.288078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.288091 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.288104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.288113 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.288121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.288136 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.288144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.288152 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 01:08:23.288161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.288169 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.288177 | orchestrator | 2026-02-28 01:08:23.288185 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-28 01:08:23.288193 | orchestrator | Saturday 28 February 2026 01:03:56 +0000 (0:00:05.779) 0:01:11.504 ***** 2026-02-28 01:08:23.288201 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.288209 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:23.288217 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.288225 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.288233 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.288241 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.288249 | orchestrator | 2026-02-28 01:08:23.288257 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-28 01:08:23.288270 | orchestrator | Saturday 28 February 2026 01:04:01 +0000 (0:00:05.059) 0:01:16.564 ***** 2026-02-28 01:08:23.288278 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.288286 | orchestrator | 2026-02-28 
01:08:23.288294 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-28 01:08:23.288302 | orchestrator | Saturday 28 February 2026 01:04:01 +0000 (0:00:00.265) 0:01:16.830 ***** 2026-02-28 01:08:23.288311 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.288319 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.288327 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:23.288335 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.288343 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.288351 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.288359 | orchestrator | 2026-02-28 01:08:23.288367 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-28 01:08:23.288389 | orchestrator | Saturday 28 February 2026 01:04:02 +0000 (0:00:01.011) 0:01:17.841 ***** 2026-02-28 01:08:23.288404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.288418 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.288431 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.288443 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:23.288455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.288467 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.288485 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.288498 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.288517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.288540 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.288553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.288566 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.288579 | orchestrator | 2026-02-28 01:08:23.288592 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-28 01:08:23.288604 | orchestrator | Saturday 28 February 2026 01:04:07 +0000 (0:00:04.323) 0:01:22.165 ***** 2026-02-28 01:08:23.288617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.288631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.288653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.288682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.288718 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.288733 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.288748 | orchestrator | 2026-02-28 01:08:23.288761 | 
orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2026-02-28 01:08:23.288775 | orchestrator | Saturday 28 February 2026 01:04:13 +0000 (0:00:06.666) 0:01:28.831 *****
2026-02-28 01:08:23.288790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.288813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.288848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.288864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.288878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.288891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.288904 | orchestrator |
2026-02-28 01:08:23.288927 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-02-28 01:08:23.288941 | orchestrator | Saturday 28 February 2026 01:04:23 +0000 (0:00:09.227) 0:01:38.059 *****
2026-02-28 01:08:23.288965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.288987 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.289002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.289016 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.289031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.289046 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.289060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.289074 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.289084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.289106 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.289134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.289150 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.289163 | orchestrator |
2026-02-28 01:08:23.289177 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-02-28 01:08:23.289192 | orchestrator | Saturday 28 February 2026 01:04:26 +0000 (0:00:03.826) 0:01:41.885 *****
2026-02-28 01:08:23.289205 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.289219 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.289228 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.289236 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:23.289244 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:08:23.289251 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:08:23.289259 | orchestrator |
2026-02-28 01:08:23.289267 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-02-28 01:08:23.289275 | orchestrator | Saturday 28 February 2026 01:04:29 +0000 (0:00:02.857) 0:01:44.743 *****
2026-02-28 01:08:23.289283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.289291 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.289300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.289314 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.289322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.289330 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.289345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.289358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.289367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.289376 | orchestrator |
2026-02-28 01:08:23.289384 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-02-28 01:08:23.289392 | orchestrator | Saturday 28 February 2026 01:04:35 +0000 (0:00:05.950) 0:01:50.693 *****
2026-02-28 01:08:23.289400 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.289408 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.289421 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.289429 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.289437 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.289445 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.289453 | orchestrator |
2026-02-28 01:08:23.289461 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-02-28 01:08:23.289469 | orchestrator | Saturday 28 February 2026 01:04:38 +0000 (0:00:02.960) 0:01:53.653 *****
2026-02-28 01:08:23.289477 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.289485 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.289492 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.289500 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.289508 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.289516 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.289524 | orchestrator |
2026-02-28 01:08:23.289532 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-02-28 01:08:23.289539 | orchestrator | Saturday 28 February 2026 01:04:41 +0000 (0:00:02.551) 0:01:56.205 *****
2026-02-28 01:08:23.289547 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.289555 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.289563 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.289571 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.289579 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.289587 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.289594 | orchestrator |
2026-02-28 01:08:23.289602 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-02-28 01:08:23.289610 | orchestrator | Saturday 28 February 2026 01:04:44 +0000 (0:00:03.801) 0:02:00.006 *****
2026-02-28 01:08:23.289618 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.289626 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.289633 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.289641 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.289649 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.289657 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.289665 | orchestrator |
2026-02-28 01:08:23.289673 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-02-28 01:08:23.289681 | orchestrator | Saturday 28 February 2026 01:04:47 +0000 (0:00:02.457) 0:02:02.463 *****
2026-02-28 01:08:23.289708 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.289718 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.289726 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.289734 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.289747 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.289755 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.289763 | orchestrator |
2026-02-28 01:08:23.289771 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-02-28 01:08:23.289779 | orchestrator | Saturday 28 February 2026 01:04:50 +0000 (0:00:02.842) 0:02:05.306 *****
2026-02-28 01:08:23.289787 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.289795 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.289803 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.289811 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.289819 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.289827 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.289835 | orchestrator |
2026-02-28 01:08:23.289843 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-02-28 01:08:23.289858 | orchestrator | Saturday 28 February 2026 01:04:52 +0000 (0:00:02.626) 0:02:07.932 *****
2026-02-28 01:08:23.289873 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-28 01:08:23.289886 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.289900 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-28 01:08:23.289924 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-28 01:08:23.289938 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.289952 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.289966 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-28 01:08:23.289978 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.289991 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-28 01:08:23.290003 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.290053 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-28 01:08:23.290072 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.290086 | orchestrator |
2026-02-28 01:08:23.290101 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-02-28 01:08:23.290116 | orchestrator | Saturday 28 February 2026 01:04:56 +0000 (0:00:04.008) 0:02:11.941 *****
2026-02-28 01:08:23.290132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.290144 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.290152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.290161 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.290181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.290195 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.290215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.290240 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.290254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.290269 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.290284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.290298 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.290312 | orchestrator |
2026-02-28 01:08:23.290320 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-02-28 01:08:23.290328 | orchestrator | Saturday 28 February 2026 01:05:00 +0000 (0:00:03.918) 0:02:15.860 *****
2026-02-28 01:08:23.290336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.290345 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.290360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.290374 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.290387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-28 01:08:23.290395 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.290404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.290412 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.290420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.290429 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.290437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.290445 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.290459 | orchestrator |
2026-02-28 01:08:23.290467 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-02-28 01:08:23.290475 | orchestrator | Saturday 28 February 2026 01:05:05 +0000 (0:00:04.336) 0:02:20.197 *****
2026-02-28 01:08:23.290483 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.290573 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.290593 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.290607 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.290619 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.290632 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.290646 | orchestrator |
2026-02-28 01:08:23.290659 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-02-28 01:08:23.290674 | orchestrator | Saturday 28 February 2026 01:05:08 +0000 (0:00:03.027) 0:02:23.224 *****
2026-02-28 01:08:23.290716 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.290731 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.290739 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.290747 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:08:23.290755 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:08:23.290763 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:08:23.290771 | orchestrator |
2026-02-28 01:08:23.290789 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-28 01:08:23.290798 | orchestrator | Saturday 28 February 2026 01:05:13 +0000 (0:00:05.018) 0:02:28.243 *****
2026-02-28 01:08:23.290806 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.290814 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.290822 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.290830 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.290838 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.290846 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.290854 | orchestrator |
2026-02-28 01:08:23.290862 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-28 01:08:23.290870 | orchestrator | Saturday 28 February 2026 01:05:17 +0000 (0:00:03.863) 0:02:32.107 *****
2026-02-28 01:08:23.290878 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.290886 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.290894 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.290902 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.290910 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.290918 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.290926 | orchestrator |
2026-02-28 01:08:23.290934 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-28 01:08:23.290942 | orchestrator | Saturday 28 February 2026 01:05:21 +0000 (0:00:04.208) 0:02:36.315 *****
2026-02-28 01:08:23.290950 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.290958 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.290966 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.290974 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.290982 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.290990 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.290998 | orchestrator |
2026-02-28 01:08:23.291006 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-28 01:08:23.291014 | orchestrator | Saturday 28 February 2026 01:05:24 +0000 (0:00:02.966) 0:02:39.282 *****
2026-02-28 01:08:23.291022 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.291030 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.291038 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.291046 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.291054 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.291062 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.291070 | orchestrator |
2026-02-28 01:08:23.291078 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-28 01:08:23.291094 | orchestrator | Saturday 28 February 2026 01:05:27 +0000 (0:00:02.971) 0:02:42.254 *****
2026-02-28 01:08:23.291102 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.291110 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.291118 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.291126 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.291134 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.291142 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.291150 | orchestrator |
2026-02-28 01:08:23.291158 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-28 01:08:23.291166 | orchestrator | Saturday 28 February 2026 01:05:30 +0000 (0:00:02.949) 0:02:45.204 *****
2026-02-28 01:08:23.291174 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.291182 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.291190 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.291198 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.291206 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.291214 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.291222 | orchestrator |
2026-02-28 01:08:23.291230 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-28 01:08:23.291238 | orchestrator | Saturday 28 February 2026 01:05:35 +0000 (0:00:04.974) 0:02:50.178 *****
2026-02-28 01:08:23.291246 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.291254 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.291262 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.291271 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.291279 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.291287 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.291295 | orchestrator |
2026-02-28 01:08:23.291303 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-28 01:08:23.291311 | orchestrator | Saturday 28 February 2026 01:05:38 +0000 (0:00:03.611) 0:02:53.789 *****
2026-02-28 01:08:23.291319 | orchestrator | skipping: [testbed-node-5] =>
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:23.291327 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.291335 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:23.291343 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.291351 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:23.291359 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.291367 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:23.291376 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.291390 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:23.291399 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.291407 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-28 01:08:23.291415 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:23.291423 | orchestrator | 2026-02-28 01:08:23.291431 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-28 01:08:23.291439 | orchestrator | Saturday 28 February 2026 01:05:42 +0000 (0:00:03.690) 0:02:57.479 ***** 2026-02-28 01:08:23.291451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.291467 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:08:23.291475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.291484 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:08:23.291492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-28 01:08:23.291500 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:08:23.291509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.291517 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:08:23.291533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.291542 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:08:23.291551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-28 01:08:23.291564 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:08:23.291572 | orchestrator | 2026-02-28 01:08:23.291580 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-28 01:08:23.291588 | orchestrator | Saturday 28 February 2026 01:05:46 +0000 (0:00:04.092) 0:03:01.572 ***** 2026-02-28 01:08:23.291597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.291605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.291619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-28 01:08:23.291632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.291645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-28 01:08:23.291654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-28 01:08:23.291662 | orchestrator |
2026-02-28 01:08:23.291670 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-28 01:08:23.291680 | orchestrator | Saturday 28 February 2026 01:05:52 +0000 (0:00:06.224) 0:03:07.796 *****
2026-02-28 01:08:23.291739 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:08:23.291754 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:08:23.291766 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:08:23.291779 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:08:23.291792 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:08:23.291804 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:08:23.291816 | orchestrator |
2026-02-28 01:08:23.291830 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-28 01:08:23.291843 | orchestrator | Saturday 28 February 2026 01:05:53 +0000 (0:00:00.785) 0:03:08.582 *****
2026-02-28 01:08:23.291855 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:23.291870 | orchestrator |
2026-02-28 01:08:23.291884 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-28 01:08:23.291898 | orchestrator | Saturday 28 February 2026 01:05:56 +0000 (0:00:02.720) 0:03:11.303 *****
2026-02-28 01:08:23.291913 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:23.291927 | orchestrator |
2026-02-28 01:08:23.291941 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-28 01:08:23.291955 | orchestrator | Saturday 28 February 2026 01:05:59 +0000 (0:00:02.892) 0:03:14.195 *****
2026-02-28 01:08:23.291969 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:23.291983 | orchestrator |
2026-02-28 01:08:23.291997 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:23.292011 | orchestrator | Saturday 28 February 2026 01:06:47 +0000 (0:00:48.331) 0:04:02.527 *****
2026-02-28 01:08:23.292024 | orchestrator |
2026-02-28 01:08:23.292037 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:23.292044 | orchestrator | Saturday 28 February 2026 01:06:47 +0000 (0:00:00.088) 0:04:02.615 *****
2026-02-28 01:08:23.292057 | orchestrator |
2026-02-28 01:08:23.292064 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:23.292071 | orchestrator | Saturday 28 February 2026 01:06:47 +0000 (0:00:00.317) 0:04:02.933 *****
2026-02-28 01:08:23.292078 | orchestrator |
2026-02-28 01:08:23.292084 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:23.292091 | orchestrator | Saturday 28 February 2026 01:06:48 +0000 (0:00:00.072) 0:04:03.006 *****
2026-02-28 01:08:23.292098 | orchestrator |
2026-02-28 01:08:23.292110 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:23.292117 | orchestrator | Saturday 28 February 2026 01:06:48 +0000 (0:00:00.079) 0:04:03.085 *****
2026-02-28 01:08:23.292124 | orchestrator |
2026-02-28 01:08:23.292131 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-28 01:08:23.292138 | orchestrator | Saturday 28 February 2026 01:06:48 +0000 (0:00:00.074) 0:04:03.160 *****
2026-02-28 01:08:23.292144 | 
orchestrator |
2026-02-28 01:08:23.292151 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-02-28 01:08:23.292158 | orchestrator | Saturday 28 February 2026 01:06:48 +0000 (0:00:00.074) 0:04:03.234 *****
2026-02-28 01:08:23.292165 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:08:23.292172 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:08:23.292178 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:08:23.292185 | orchestrator |
2026-02-28 01:08:23.292196 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-28 01:08:23.292203 | orchestrator | Saturday 28 February 2026 01:07:19 +0000 (0:00:31.570) 0:04:34.805 *****
2026-02-28 01:08:23.292210 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:08:23.292217 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:08:23.292223 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:08:23.292230 | orchestrator |
2026-02-28 01:08:23.292237 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:08:23.292244 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-28 01:08:23.292252 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-28 01:08:23.292259 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-28 01:08:23.292266 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-28 01:08:23.292273 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-28 01:08:23.292279 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-28 01:08:23.292286 | orchestrator |
2026-02-28 01:08:23.292293 | orchestrator |
2026-02-28 01:08:23.292300 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:08:23.292307 | orchestrator | Saturday 28 February 2026 01:08:21 +0000 (0:01:01.567) 0:05:36.373 *****
2026-02-28 01:08:23.292313 | orchestrator | ===============================================================================
2026-02-28 01:08:23.292320 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 61.57s
2026-02-28 01:08:23.292327 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 48.33s
2026-02-28 01:08:23.292334 | orchestrator | neutron : Restart neutron-server container ----------------------------- 31.57s
2026-02-28 01:08:23.292340 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 9.23s
2026-02-28 01:08:23.292347 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.81s
2026-02-28 01:08:23.292358 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.76s
2026-02-28 01:08:23.292365 | orchestrator | neutron : Copying over config.json files for services ------------------- 6.67s
2026-02-28 01:08:23.292371 | orchestrator | neutron : Check neutron containers -------------------------------------- 6.22s
2026-02-28 01:08:23.292378 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 6.07s
2026-02-28 01:08:23.292385 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.95s
2026-02-28 01:08:23.292392 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.78s
2026-02-28 01:08:23.292398 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 5.06s
2026-02-28 01:08:23.292405 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.02s
2026-02-28 01:08:23.292412 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.97s
2026-02-28 01:08:23.292419 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.91s
2026-02-28 01:08:23.292425 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.51s
2026-02-28 01:08:23.292432 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.34s
2026-02-28 01:08:23.292439 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.32s
2026-02-28 01:08:23.292446 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.21s
2026-02-28 01:08:23.292452 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 4.09s
2026-02-28 01:08:23.292459 | orchestrator | 2026-02-28 01:08:23 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:08:26.318533 | orchestrator | 2026-02-28 01:08:26 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED
2026-02-28 01:08:26.322171 | orchestrator | 2026-02-28 01:08:26 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:08:26.324640 | orchestrator | 2026-02-28 01:08:26 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED
2026-02-28 01:08:26.326148 | orchestrator | 2026-02-28 01:08:26 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:08:26.326464 | orchestrator | 2026-02-28 01:08:26 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:08:29.366384 | orchestrator | 2026-02-28 01:08:29 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED
2026-02-28 01:08:29.367030 | orchestrator | 2026-02-28 01:08:29 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED
2026-02-28 01:08:29.368681 | orchestrator | 2026-02-28 01:08:29 | 
INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:29.369958 | orchestrator | 2026-02-28 01:08:29 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:29.370068 | orchestrator | 2026-02-28 01:08:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:32.407122 | orchestrator | 2026-02-28 01:08:32 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:32.408747 | orchestrator | 2026-02-28 01:08:32 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:32.411048 | orchestrator | 2026-02-28 01:08:32 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:32.412430 | orchestrator | 2026-02-28 01:08:32 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:32.412480 | orchestrator | 2026-02-28 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:35.466131 | orchestrator | 2026-02-28 01:08:35 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:35.467478 | orchestrator | 2026-02-28 01:08:35 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:35.469315 | orchestrator | 2026-02-28 01:08:35 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:35.471423 | orchestrator | 2026-02-28 01:08:35 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:35.471488 | orchestrator | 2026-02-28 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:38.515034 | orchestrator | 2026-02-28 01:08:38 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:38.515974 | orchestrator | 2026-02-28 01:08:38 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:38.517143 | orchestrator | 2026-02-28 01:08:38 | INFO  | Task 
6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:38.518616 | orchestrator | 2026-02-28 01:08:38 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:38.518683 | orchestrator | 2026-02-28 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:41.551620 | orchestrator | 2026-02-28 01:08:41 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:41.552453 | orchestrator | 2026-02-28 01:08:41 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:41.553821 | orchestrator | 2026-02-28 01:08:41 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:41.555471 | orchestrator | 2026-02-28 01:08:41 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:41.555848 | orchestrator | 2026-02-28 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:44.622194 | orchestrator | 2026-02-28 01:08:44 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:44.623855 | orchestrator | 2026-02-28 01:08:44 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:44.627267 | orchestrator | 2026-02-28 01:08:44 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:44.627347 | orchestrator | 2026-02-28 01:08:44 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:44.627363 | orchestrator | 2026-02-28 01:08:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:47.674822 | orchestrator | 2026-02-28 01:08:47 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:47.678400 | orchestrator | 2026-02-28 01:08:47 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:47.683234 | orchestrator | 2026-02-28 01:08:47 | INFO  | Task 
6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:47.685605 | orchestrator | 2026-02-28 01:08:47 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:47.687455 | orchestrator | 2026-02-28 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:50.730281 | orchestrator | 2026-02-28 01:08:50 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:50.731101 | orchestrator | 2026-02-28 01:08:50 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:50.732531 | orchestrator | 2026-02-28 01:08:50 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:50.733841 | orchestrator | 2026-02-28 01:08:50 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:50.733916 | orchestrator | 2026-02-28 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:53.774587 | orchestrator | 2026-02-28 01:08:53 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:53.775164 | orchestrator | 2026-02-28 01:08:53 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:53.776154 | orchestrator | 2026-02-28 01:08:53 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:53.777176 | orchestrator | 2026-02-28 01:08:53 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:53.777208 | orchestrator | 2026-02-28 01:08:53 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:08:56.827518 | orchestrator | 2026-02-28 01:08:56 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:08:56.830953 | orchestrator | 2026-02-28 01:08:56 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:08:56.831825 | orchestrator | 2026-02-28 01:08:56 | INFO  | Task 
6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:08:56.832602 | orchestrator | 2026-02-28 01:08:56 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:08:56.832621 | orchestrator | 2026-02-28 01:08:56 | INFO  | Wait 1 second(s) until the next check [identical polling cycles for tasks b7d7495a-4a67-41be-bd17-f96aee22b42c, 74149736-33fd-4f01-bbeb-6ba573075c69, 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 and 5b51cfa8-6c41-4e14-8599-164f00d1dc99 repeated every 3 seconds from 01:08:59 through 01:09:39] 2026-02-28 01:09:42.828542 | orchestrator | 2026-02-28 01:09:42 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:09:42.833815 | orchestrator | 2026-02-28 01:09:42 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:09:42.838538 | orchestrator | 2026-02-28 01:09:42 | INFO  | Task 
6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:09:42.842194 | orchestrator | 2026-02-28 01:09:42 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:09:42.842273 | orchestrator | 2026-02-28 01:09:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:45.885718 | orchestrator | 2026-02-28 01:09:45 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:09:45.886096 | orchestrator | 2026-02-28 01:09:45 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state STARTED 2026-02-28 01:09:45.890338 | orchestrator | 2026-02-28 01:09:45 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:09:45.893251 | orchestrator | 2026-02-28 01:09:45 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:09:45.893313 | orchestrator | 2026-02-28 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:48.944248 | orchestrator | 2026-02-28 01:09:48 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:09:48.945081 | orchestrator | 2026-02-28 01:09:48 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:09:48.950252 | orchestrator | 2026-02-28 01:09:48 | INFO  | Task 74149736-33fd-4f01-bbeb-6ba573075c69 is in state SUCCESS 2026-02-28 01:09:48.952036 | orchestrator | 2026-02-28 01:09:48.952097 | orchestrator | 2026-02-28 01:09:48.952106 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:09:48.952113 | orchestrator | 2026-02-28 01:09:48.952119 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:09:48.952125 | orchestrator | Saturday 28 February 2026 01:05:33 +0000 (0:00:00.835) 0:00:00.835 ***** 2026-02-28 01:09:48.952131 | orchestrator | ok: [testbed-manager] 2026-02-28 01:09:48.952138 | orchestrator | ok: [testbed-node-0] 
2026-02-28 01:09:48.952144 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:09:48.952149 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:09:48.952154 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:09:48.952160 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:09:48.952165 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:09:48.952170 | orchestrator | 2026-02-28 01:09:48.952176 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:09:48.952181 | orchestrator | Saturday 28 February 2026 01:05:35 +0000 (0:00:01.976) 0:00:02.811 ***** 2026-02-28 01:09:48.952187 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-28 01:09:48.952193 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-28 01:09:48.952198 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-28 01:09:48.952203 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-28 01:09:48.952208 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-28 01:09:48.952213 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-28 01:09:48.952219 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-28 01:09:48.952224 | orchestrator | 2026-02-28 01:09:48.952229 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-02-28 01:09:48.952234 | orchestrator | 2026-02-28 01:09:48.952239 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-28 01:09:48.952245 | orchestrator | Saturday 28 February 2026 01:05:37 +0000 (0:00:01.513) 0:00:04.325 ***** 2026-02-28 01:09:48.952265 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:09:48.952272 | 
orchestrator | 2026-02-28 01:09:48.952321 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-28 01:09:48.952371 | orchestrator | Saturday 28 February 2026 01:05:39 +0000 (0:00:02.506) 0:00:06.831 ***** 2026-02-28 01:09:48.952383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.952457 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 01:09:48.952497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.952507 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.952532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.952542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.952559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.952634 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.952656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.952666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.952698 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 01:09:48.952720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.952728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.952737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.952751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.952769 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.952777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.952785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.952793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.952808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-28 01:09:48.952816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.952829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.952843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.952851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.952860 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.952869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.952878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.952893 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.952901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.952909 | orchestrator | 2026-02-28 01:09:48.952918 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-28 01:09:48.952932 | orchestrator | Saturday 28 February 2026 01:05:44 +0000 (0:00:05.005) 0:00:11.836 ***** 2026-02-28 01:09:48.952940 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:09:48.952950 | orchestrator | 2026-02-28 01:09:48.952962 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-28 01:09:48.952971 | orchestrator | Saturday 28 February 2026 01:05:48 +0000 (0:00:03.967) 0:00:15.803 ***** 2026-02-28 01:09:48.953058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.953067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.953076 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 01:09:48.953085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.953100 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.953176 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.953210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.953225 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.953235 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.953243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.953252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-28 01:09:48.953261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.953278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.953286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.953308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.953325 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.953334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.953345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 
01:09:48.953356 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.953392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.953409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.953419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.953441 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 01:09:48.953454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.953463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.953473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.953482 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.954184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 
01:09:48.954269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.954280 | orchestrator | 2026-02-28 01:09:48.954288 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-28 01:09:48.954295 | orchestrator | Saturday 28 February 2026 01:05:57 +0000 (0:00:08.820) 0:00:24.624 ***** 2026-02-28 01:09:48.954311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-28 01:09:48.954320 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954334 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954402 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-28 01:09:48.954415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954422 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954429 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954524 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:09:48.954531 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:48.954538 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:48.954545 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:48.954555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954579 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.954586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954606 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:09:48.954612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954642 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:09:48.954649 | orchestrator | 2026-02-28 01:09:48.954655 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-28 01:09:48.954662 | orchestrator | Saturday 28 February 2026 01:06:00 +0000 (0:00:02.712) 0:00:27.337 ***** 2026-02-28 01:09:48.954691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-28 01:09:48.954699 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954713 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-28 01:09:48.954725 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954737 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954774 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:09:48.954781 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:48.954792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954806 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954839 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:48.954854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-28 01:09:48.954919 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:48.954935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.954968 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.954984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.954996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.955009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.955016 | orchestrator | skipping: 
[testbed-node-5] 2026-02-28 01:09:48.955022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-28 01:09:48.955029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.955040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-28 01:09:48.955046 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:09:48.955053 | orchestrator | 2026-02-28 01:09:48.955059 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-28 01:09:48.955065 | orchestrator | Saturday 28 February 2026 01:06:02 
+0000 (0:00:02.933) 0:00:30.270 ***** 2026-02-28 01:09:48.955072 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 01:09:48.955082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.955094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.955101 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.955107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.955114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.955124 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.955131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.955138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.955148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.955159 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.955166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.955172 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.955179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.955190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.955197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.955203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.955213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.955225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.955231 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-28 01:09:48.955239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.955249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.955256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.955263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.955277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.955284 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.955291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.955298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.955304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.955311 | orchestrator | 2026-02-28 01:09:48.955317 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-28 01:09:48.955324 | orchestrator | Saturday 28 February 2026 01:06:11 +0000 (0:00:08.931) 0:00:39.202 ***** 2026-02-28 01:09:48.955330 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:09:48.955337 | orchestrator | 2026-02-28 01:09:48.955343 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-28 01:09:48.955353 | orchestrator | Saturday 28 February 2026 01:06:13 +0000 (0:00:01.620) 0:00:40.823 ***** 2026-02-28 01:09:48.955360 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104384, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.262737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955367 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104384, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.262737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955380 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104384, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.262737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955387 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104384, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.262737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-02-28 01:09:48.955397 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104384, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.262737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955408 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104384, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.262737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955435 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104384, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.262737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:09:48.955484 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1104420, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.267903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955505 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1104420, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.267903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955518 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1104420, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.267903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955528 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 12944, 'inode': 1104420, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.267903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955537 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1104371, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2616537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955547 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1104420, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.267903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955562 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1104420, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.267903, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955573 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1104371, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2616537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955592 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1104420, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.267903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:09:48.955607 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1104371, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2616537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-02-28 01:09:48.955619 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1104371, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2616537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955628 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104405, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2659032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955637 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1104371, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2616537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955652 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1104371, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2616537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955662 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104405, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2659032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955727 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104405, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2659032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955744 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 
'inode': 1104405, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2659032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955756 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104405, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2659032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955767 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104364, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.259692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955778 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104364, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.259692, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955789 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1104371, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2616537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:09:48.955809 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104405, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2659032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955817 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104364, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.259692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
 2026-02-28 01:09:48.955827 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104364, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.259692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955834 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104385, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2633202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955840 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104364, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.259692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955847 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104385, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2633202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.955853 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104364, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.259692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956253 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104385, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2633202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956277 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1104400, 'dev': 114, 
'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2655413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956291 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1104400, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2655413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956298 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104385, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2633202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956305 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1104390, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2637024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956311 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104385, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2633202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956318 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104385, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2633202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956339 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104405, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2659032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:09:48.956346 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1104400, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2655413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956357 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1104380, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.262423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956363 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1104390, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2637024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956370 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1104400, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2655413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956377 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1104400, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2655413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956383 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1104400, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2655413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956399 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1104417, 'dev': 114, 'nlink': 1, 'atime': 
1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2669032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956406 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1104390, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2637024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956416 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1104380, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.262423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-28 01:09:48.956422 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1104390, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2637024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-28 01:09:48.956429 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules, size=5593)
2026-02-28 01:09:48.956436 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules, size=5593)
2026-02-28 01:09:48.956450 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2026-02-28 01:09:48.956462 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2026-02-28 01:09:48.956469 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2026-02-28 01:09:48.956479 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2026-02-28 01:09:48.956486 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2026-02-28 01:09:48.956492 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2026-02-28 01:09:48.956499 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2026-02-28 01:09:48.956513 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules, size=334)
2026-02-28 01:09:48.956528 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2026-02-28 01:09:48.956542 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2026-02-28 01:09:48.956567 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2026-02-28 01:09:48.956577 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2026-02-28 01:09:48.956588 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules, size=334)
2026-02-28 01:09:48.956599 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules, size=7933)
2026-02-28 01:09:48.956617 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2026-02-28 01:09:48.956634 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2026-02-28 01:09:48.956644 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2026-02-28 01:09:48.956659 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2026-02-28 01:09:48.956720 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2026-02-28 01:09:48.956734 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules, size=334)
2026-02-28 01:09:48.956751 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2026-02-28 01:09:48.956763 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2026-02-28 01:09:48.956781 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2026-02-28 01:09:48.956792 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2026-02-28 01:09:48.956809 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules, size=14018)
2026-02-28 01:09:48.956820 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2026-02-28 01:09:48.956831 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules, size=334)
2026-02-28 01:09:48.956848 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules, size=334)
2026-02-28 01:09:48.956860 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules, size=5065)
2026-02-28 01:09:48.956878 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules, size=334)
2026-02-28 01:09:48.956890 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2026-02-28 01:09:48.956907 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2026-02-28 01:09:48.956919 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules, size=5065)
2026-02-28 01:09:48.956930 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules, size=2309)
2026-02-28 01:09:48.956944 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2026-02-28 01:09:48.956952 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2026-02-28 01:09:48.956964 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2026-02-28 01:09:48.956973 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules, size=5065)
2026-02-28 01:09:48.956985 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules, size=5593)
2026-02-28 01:09:48.956993 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules, size=5065)
2026-02-28 01:09:48.957005 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules, size=2309)
2026-02-28 01:09:48.957013 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules, size=3792)
2026-02-28 01:09:48.957020 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2026-02-28 01:09:48.957031 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules, size=2309)
2026-02-28 01:09:48.957039 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2026-02-28 01:09:48.957050 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules, size=2309)
2026-02-28 01:09:48.957058 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules, size=3792)
2026-02-28 01:09:48.957070 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules, size=3539)
2026-02-28 01:09:48.957077 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:09:48.957086 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules, size=5065)
2026-02-28 01:09:48.957094 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules, size=3792)
2026-02-28 01:09:48.957106 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules, size=5065)
2026-02-28 01:09:48.957113 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules, size=3539)
2026-02-28 01:09:48.957120 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:09:48.957133 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2026-02-28 01:09:48.957140 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules, size=3539)
2026-02-28 01:09:48.957152 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:09:48.957160 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules, size=3792)
2026-02-28 01:09:48.957168 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules, size=2309)
2026-02-28 01:09:48.957176 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules, size=2309)
2026-02-28 01:09:48.957187 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules, size=3539)
2026-02-28 01:09:48.957195 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules, size=3792)
2026-02-28 01:09:48.957202 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:09:48.957215 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules, size=3792)
2026-02-28 01:09:48.957229 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules, size=3539)
2026-02-28 01:09:48.957236 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:09:48.957242 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules, size=3539)
2026-02-28 01:09:48.957249 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:09:48.957255 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2026-02-28 01:09:48.957262 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2026-02-28 01:09:48.957272 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/redfish.rules, size=334)
2026-02-28 01:09:48.957279 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk':
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1104413, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2669032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:09:48.957289 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1104369, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2599032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:09:48.957300 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1104363, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.258903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:09:48.957307 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1104397, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 
1772237895.2641835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:09:48.957314 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1104393, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2641835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:09:48.957320 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1104434, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2695854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-28 01:09:48.957327 | orchestrator | 2026-02-28 01:09:48.957334 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-28 01:09:48.957340 | orchestrator | Saturday 28 February 2026 01:06:53 +0000 (0:00:39.687) 0:01:20.511 ***** 2026-02-28 01:09:48.957347 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:09:48.957353 | orchestrator | 2026-02-28 01:09:48.957363 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-28 
01:09:48.957370 | orchestrator | Saturday 28 February 2026 01:06:54 +0000 (0:00:01.540) 0:01:22.051 ***** 2026-02-28 01:09:48.957376 | orchestrator | [WARNING]: Skipped 2026-02-28 01:09:48.957383 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957389 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-28 01:09:48.957396 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957403 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-02-28 01:09:48.957409 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:09:48.957416 | orchestrator | [WARNING]: Skipped 2026-02-28 01:09:48.957422 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957433 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-28 01:09:48.957439 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957446 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-28 01:09:48.957452 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:09:48.957458 | orchestrator | [WARNING]: Skipped 2026-02-28 01:09:48.957465 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957471 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-28 01:09:48.957477 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957484 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-28 01:09:48.957490 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-28 01:09:48.957496 | orchestrator | [WARNING]: Skipped 2026-02-28 01:09:48.957506 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 
01:09:48.957512 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-28 01:09:48.957519 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957525 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-28 01:09:48.957531 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 01:09:48.957538 | orchestrator | [WARNING]: Skipped 2026-02-28 01:09:48.957544 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957550 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-28 01:09:48.957557 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957563 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-02-28 01:09:48.957569 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-28 01:09:48.957575 | orchestrator | [WARNING]: Skipped 2026-02-28 01:09:48.957582 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957588 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-28 01:09:48.957594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957601 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-28 01:09:48.957607 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-28 01:09:48.957613 | orchestrator | [WARNING]: Skipped 2026-02-28 01:09:48.957620 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957626 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-28 01:09:48.957633 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-28 01:09:48.957639 | orchestrator | node-5/prometheus.yml.d' is not a directory 
2026-02-28 01:09:48.957645 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-28 01:09:48.957652 | orchestrator | 2026-02-28 01:09:48.957658 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-02-28 01:09:48.957664 | orchestrator | Saturday 28 February 2026 01:06:58 +0000 (0:00:03.822) 0:01:25.873 ***** 2026-02-28 01:09:48.957694 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:09:48.957703 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:48.957714 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:09:48.957722 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:48.957729 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:09:48.957738 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:48.957751 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:09:48.957772 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:09:48.957782 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:09:48.957792 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.957803 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-28 01:09:48.957814 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:09:48.957824 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-28 01:09:48.957834 | orchestrator | 2026-02-28 01:09:48.957840 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-28 01:09:48.957846 | orchestrator | Saturday 28 February 2026 01:07:20 +0000 
(0:00:22.256) 0:01:48.130 ***** 2026-02-28 01:09:48.957853 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:09:48.957864 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:48.957871 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:09:48.957877 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:48.957884 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:09:48.957890 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:48.957896 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:09:48.957903 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:09:48.957909 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:09:48.957915 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.957921 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-28 01:09:48.957928 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:09:48.957934 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-28 01:09:48.957940 | orchestrator | 2026-02-28 01:09:48.957947 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-28 01:09:48.957953 | orchestrator | Saturday 28 February 2026 01:07:25 +0000 (0:00:04.823) 0:01:52.954 ***** 2026-02-28 01:09:48.957959 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:09:48.957967 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:48.957973 | 
orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:09:48.957984 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:09:48.957991 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:48.957997 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-28 01:09:48.958004 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:48.958010 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:09:48.958068 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:09:48.958075 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:09:48.958082 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.958088 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-28 01:09:48.958094 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:09:48.958101 | orchestrator | 2026-02-28 01:09:48.958112 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-28 01:09:48.958119 | orchestrator | Saturday 28 February 2026 01:07:30 +0000 (0:00:04.959) 0:01:57.913 ***** 2026-02-28 01:09:48.958125 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:09:48.958132 | orchestrator | 2026-02-28 01:09:48.958138 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-28 01:09:48.958144 | orchestrator | Saturday 28 February 2026 01:07:32 +0000 
(0:00:02.098) 0:02:00.012 ***** 2026-02-28 01:09:48.958151 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:09:48.958157 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:48.958164 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:48.958170 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:48.958176 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:09:48.958183 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.958189 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:09:48.958195 | orchestrator | 2026-02-28 01:09:48.958202 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-28 01:09:48.958208 | orchestrator | Saturday 28 February 2026 01:07:34 +0000 (0:00:02.010) 0:02:02.023 ***** 2026-02-28 01:09:48.958215 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:09:48.958221 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.958227 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:09:48.958233 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:09:48.958240 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:09:48.958246 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:09:48.958253 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:09:48.958259 | orchestrator | 2026-02-28 01:09:48.958265 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-28 01:09:48.958271 | orchestrator | Saturday 28 February 2026 01:07:37 +0000 (0:00:03.058) 0:02:05.081 ***** 2026-02-28 01:09:48.958278 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:09:48.958285 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:09:48.958291 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  
2026-02-28 01:09:48.958298 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:48.958304 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:09:48.958310 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:48.958316 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:09:48.958322 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:48.958333 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:09:48.958339 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:09:48.958345 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:09:48.958352 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.958358 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-28 01:09:48.958365 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:09:48.958371 | orchestrator | 2026-02-28 01:09:48.958377 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-28 01:09:48.958384 | orchestrator | Saturday 28 February 2026 01:07:39 +0000 (0:00:02.098) 0:02:07.180 ***** 2026-02-28 01:09:48.958390 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:09:48.958396 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:48.958403 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:09:48.958409 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:48.958416 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:09:48.958428 | orchestrator | skipping: [testbed-node-3] 2026-02-28 
01:09:48.958434 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:09:48.958440 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:48.958447 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:09:48.958453 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:09:48.958463 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-28 01:09:48.958469 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.958476 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-28 01:09:48.958482 | orchestrator | 2026-02-28 01:09:48.958488 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-28 01:09:48.958495 | orchestrator | Saturday 28 February 2026 01:07:42 +0000 (0:00:02.134) 0:02:09.315 ***** 2026-02-28 01:09:48.958501 | orchestrator | [WARNING]: Skipped 2026-02-28 01:09:48.958508 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-28 01:09:48.958514 | orchestrator | due to this access issue: 2026-02-28 01:09:48.958520 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-28 01:09:48.958527 | orchestrator | not a directory 2026-02-28 01:09:48.958533 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-28 01:09:48.958540 | orchestrator | 2026-02-28 01:09:48.958546 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-28 01:09:48.958552 | orchestrator | Saturday 28 February 2026 01:07:43 +0000 (0:00:01.275) 0:02:10.591 ***** 2026-02-28 01:09:48.958559 | orchestrator | skipping: [testbed-manager] 2026-02-28 
01:09:48.958565 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:48.958571 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:48.958578 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:48.958584 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:09:48.958590 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.958597 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:09:48.958603 | orchestrator | 2026-02-28 01:09:48.958609 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-28 01:09:48.958616 | orchestrator | Saturday 28 February 2026 01:07:44 +0000 (0:00:01.203) 0:02:11.794 ***** 2026-02-28 01:09:48.958622 | orchestrator | skipping: [testbed-manager] 2026-02-28 01:09:48.958628 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:09:48.958635 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:09:48.958641 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:09:48.958647 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:09:48.958653 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:09:48.958660 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:09:48.958666 | orchestrator | 2026-02-28 01:09:48.958691 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-28 01:09:48.958698 | orchestrator | Saturday 28 February 2026 01:07:45 +0000 (0:00:01.189) 0:02:12.984 ***** 2026-02-28 01:09:48.958706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.958721 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-28 01:09:48.958739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.958754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.958775 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.958787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.958798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.958809 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.958820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-28 01:09:48.958844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-28 01:09:48.958855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-28 01:09:48.958862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:09:48.958873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:09:48.958880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:09:48.958887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:09:48.958893 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:09:48.958905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:09:48.958915 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-28 01:09:48.958923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:09:48.958929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-28 01:09:48.958942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-28 01:09:48.958950 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-28 01:09:48.958958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:09:48.958969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:09:48.958980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-28 01:09:48.958987 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:09:48.958994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:09:48.959003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:09:48.959010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-28 01:09:48.959017 | orchestrator |
2026-02-28 01:09:48.959023 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-02-28 01:09:48.959030 | orchestrator | Saturday 28 February 2026 01:07:49 +0000 (0:00:04.053) 0:02:17.038 *****
2026-02-28 01:09:48.959036 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-28 01:09:48.959043 | orchestrator | skipping: [testbed-manager]
2026-02-28 01:09:48.959049 | orchestrator |
2026-02-28 01:09:48.959056 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:09:48.959062 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.089) 0:02:18.497 *****
2026-02-28 01:09:48.959075 | orchestrator |
2026-02-28 01:09:48.959082 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:09:48.959088 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.079) 0:02:18.587 *****
2026-02-28 01:09:48.959094 | orchestrator |
2026-02-28 01:09:48.959101 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:09:48.959107 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.071) 0:02:18.666 *****
2026-02-28 01:09:48.959113 | orchestrator |
2026-02-28 01:09:48.959119 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:09:48.959126 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.289) 0:02:19.028 *****
2026-02-28 01:09:48.959132 | orchestrator |
2026-02-28 01:09:48.959138 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:09:48.959144 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.134) 0:02:19.163 *****
2026-02-28 01:09:48.959151 | orchestrator |
2026-02-28 01:09:48.959157 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:09:48.959163 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.073) 0:02:19.237 *****
2026-02-28 01:09:48.959169 | orchestrator |
2026-02-28 01:09:48.959175 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-28 01:09:48.959182 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.101) 0:02:19.339 *****
2026-02-28 01:09:48.959188 | orchestrator |
2026-02-28 01:09:48.959195 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-02-28 01:09:48.959201 | orchestrator | Saturday 28 February 2026 01:07:52 +0000 (0:00:20.608) 0:02:39.947 *****
2026-02-28 01:09:48.959207 | orchestrator | changed: [testbed-manager]
2026-02-28 01:09:48.959213 | orchestrator |
2026-02-28 01:09:48.959220 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-02-28 01:09:48.959229 | orchestrator | Saturday 28 February 2026 01:08:12 +0000 (0:00:16.428) 0:02:56.376 *****
2026-02-28 01:09:48.959236 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:09:48.959242 | orchestrator | changed: [testbed-manager]
2026-02-28 01:09:48.959249 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:09:48.959255 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:09:48.959261 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:09:48.959268 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:48.959274 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:09:48.959280 | orchestrator |
2026-02-28 01:09:48.959287 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-02-28 01:09:48.959293 | orchestrator | Saturday 28 February 2026 01:08:29 +0000 (0:00:12.805) 0:03:09.181 *****
2026-02-28 01:09:48.959299 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:09:48.959306 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:48.959312 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:09:48.959318 | orchestrator |
2026-02-28 01:09:48.959325 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-02-28 01:09:48.959331 | orchestrator | Saturday 28 February 2026 01:08:41 +0000 (0:00:08.140) 0:03:17.322 *****
2026-02-28 01:09:48.959337 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:09:48.959344 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:09:48.959350 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:48.959356 | orchestrator |
2026-02-28 01:09:48.959362 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-02-28 01:09:48.959369 | orchestrator | Saturday 28 February 2026 01:08:50 +0000 (0:00:17.047) 0:03:34.369 *****
2026-02-28 01:09:48.959375 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:09:48.959381 | orchestrator | changed: [testbed-manager]
2026-02-28 01:09:48.959388 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:48.959394 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:09:48.959401 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:09:48.959411 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:09:48.959417 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:09:48.959424 | orchestrator |
2026-02-28 01:09:48.959430 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-28 01:09:48.959439 | orchestrator | Saturday 28 February 2026 01:09:07 +0000 (0:00:10.517) 0:03:44.887 *****
2026-02-28 01:09:48.959446 | orchestrator | changed: [testbed-manager]
2026-02-28 01:09:48.959452 | orchestrator |
2026-02-28 01:09:48.959458 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-28 01:09:48.959465 | orchestrator | Saturday 28 February 2026 01:09:17 +0000 (0:00:13.830) 0:03:58.718 *****
2026-02-28 01:09:48.959471 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:09:48.959477 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:09:48.959483 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:09:48.959490 | orchestrator |
2026-02-28 01:09:48.959496 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-28 01:09:48.959502 | orchestrator | Saturday 28 February 2026 01:09:31 +0000 (0:00:07.867) 0:04:06.586 *****
2026-02-28 01:09:48.959509 | orchestrator | changed: [testbed-manager]
2026-02-28 01:09:48.959515 | orchestrator |
2026-02-28 01:09:48.959521 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-28 01:09:48.959527 | orchestrator | Saturday 28 February 2026 01:09:39 +0000 (0:00:07.084) 0:04:13.670 *****
2026-02-28 01:09:48.959534 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:09:48.959540 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:09:48.959547 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:09:48.959553 | orchestrator |
2026-02-28 01:09:48.959559 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:09:48.959566 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-28 01:09:48.959573 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-28 01:09:48.959579 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-28 01:09:48.959586 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-28 01:09:48.959592 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-28 01:09:48.959598 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-28 01:09:48.959605 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-28 01:09:48.959611 | orchestrator |
2026-02-28 01:09:48.959617 | orchestrator |
2026-02-28 01:09:48.959624 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:09:48.959630 | orchestrator | Saturday 28 February 2026 01:09:46 +0000 (0:00:07.084) 0:04:13.670 *****
2026-02-28 01:09:48.959637 | orchestrator | ===============================================================================
2026-02-28 01:09:48.959643 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 39.69s
2026-02-28 01:09:48.959650 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 22.26s
2026-02-28 01:09:48.959656 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.61s
2026-02-28 01:09:48.959662 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.05s
2026-02-28 01:09:48.959692 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.43s
2026-02-28 01:09:48.959705 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.83s
2026-02-28 01:09:48.959723 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.81s
2026-02-28 01:09:48.959734 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 10.52s
2026-02-28 01:09:48.959745 | orchestrator | prometheus : Copying over config.json files ----------------------------- 8.93s
2026-02-28 01:09:48.959756 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 8.82s
2026-02-28 01:09:48.959767 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 8.14s
2026-02-28 01:09:48.959778 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 7.87s
2026-02-28 01:09:48.959789 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 7.08s
2026-02-28 01:09:48.959798 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 5.01s
2026-02-28 01:09:48.959805 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 4.96s
2026-02-28 01:09:48.959811 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.82s
2026-02-28 01:09:48.959817 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.05s
2026-02-28 01:09:48.959824 | orchestrator | prometheus : include_tasks ---------------------------------------------- 3.97s
2026-02-28 01:09:48.959830 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.82s
2026-02-28 01:09:48.959839 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.06s
2026-02-28 01:09:48.959850 | orchestrator | 2026-02-28 01:09:48 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED
2026-02-28 01:09:48.959874 | orchestrator | 2026-02-28 01:09:48 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED
2026-02-28 01:09:48.959884 | orchestrator | 2026-02-28 01:09:48 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:09:51.993183 | orchestrator | 2026-02-28 01:09:51 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED
2026-02-28 01:09:51.995119 | orchestrator | 
2026-02-28 01:09:51 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:09:51.997203 | orchestrator | 2026-02-28 01:09:51 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:09:51.999353 | orchestrator | 2026-02-28 01:09:51 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:09:51.999402 | orchestrator | 2026-02-28 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:55.050597 | orchestrator | 2026-02-28 01:09:55 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:09:55.052521 | orchestrator | 2026-02-28 01:09:55 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:09:55.053832 | orchestrator | 2026-02-28 01:09:55 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:09:55.055839 | orchestrator | 2026-02-28 01:09:55 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:09:55.055888 | orchestrator | 2026-02-28 01:09:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:09:58.100097 | orchestrator | 2026-02-28 01:09:58 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:09:58.101657 | orchestrator | 2026-02-28 01:09:58 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:09:58.104329 | orchestrator | 2026-02-28 01:09:58 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:09:58.106314 | orchestrator | 2026-02-28 01:09:58 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:09:58.106388 | orchestrator | 2026-02-28 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:01.146887 | orchestrator | 2026-02-28 01:10:01 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:01.149039 | orchestrator | 2026-02-28 01:10:01 | INFO  | 
Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:01.150870 | orchestrator | 2026-02-28 01:10:01 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:01.152713 | orchestrator | 2026-02-28 01:10:01 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:01.152854 | orchestrator | 2026-02-28 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:04.194612 | orchestrator | 2026-02-28 01:10:04 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:04.195129 | orchestrator | 2026-02-28 01:10:04 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:04.196420 | orchestrator | 2026-02-28 01:10:04 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:04.197503 | orchestrator | 2026-02-28 01:10:04 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:04.197537 | orchestrator | 2026-02-28 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:07.238230 | orchestrator | 2026-02-28 01:10:07 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:07.239116 | orchestrator | 2026-02-28 01:10:07 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:07.241595 | orchestrator | 2026-02-28 01:10:07 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:07.245804 | orchestrator | 2026-02-28 01:10:07 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:07.245894 | orchestrator | 2026-02-28 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:10.283389 | orchestrator | 2026-02-28 01:10:10 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:10.283924 | orchestrator | 2026-02-28 01:10:10 | INFO  | Task 
85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:10.284524 | orchestrator | 2026-02-28 01:10:10 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:10.285714 | orchestrator | 2026-02-28 01:10:10 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:10.285756 | orchestrator | 2026-02-28 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:13.325274 | orchestrator | 2026-02-28 01:10:13 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:13.325858 | orchestrator | 2026-02-28 01:10:13 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:13.327079 | orchestrator | 2026-02-28 01:10:13 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:13.328018 | orchestrator | 2026-02-28 01:10:13 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:13.328060 | orchestrator | 2026-02-28 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:16.363768 | orchestrator | 2026-02-28 01:10:16 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:16.364592 | orchestrator | 2026-02-28 01:10:16 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:16.366649 | orchestrator | 2026-02-28 01:10:16 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:16.368906 | orchestrator | 2026-02-28 01:10:16 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:16.368947 | orchestrator | 2026-02-28 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:19.416991 | orchestrator | 2026-02-28 01:10:19 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:19.418907 | orchestrator | 2026-02-28 01:10:19 | INFO  | Task 
85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:19.420399 | orchestrator | 2026-02-28 01:10:19 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:19.421284 | orchestrator | 2026-02-28 01:10:19 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:19.421328 | orchestrator | 2026-02-28 01:10:19 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:22.458268 | orchestrator | 2026-02-28 01:10:22 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:22.458858 | orchestrator | 2026-02-28 01:10:22 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:22.459842 | orchestrator | 2026-02-28 01:10:22 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:22.463103 | orchestrator | 2026-02-28 01:10:22 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:22.463166 | orchestrator | 2026-02-28 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:25.498337 | orchestrator | 2026-02-28 01:10:25 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:25.499831 | orchestrator | 2026-02-28 01:10:25 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:25.501240 | orchestrator | 2026-02-28 01:10:25 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:25.503402 | orchestrator | 2026-02-28 01:10:25 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:25.503449 | orchestrator | 2026-02-28 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:28.536310 | orchestrator | 2026-02-28 01:10:28 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:28.537651 | orchestrator | 2026-02-28 01:10:28 | INFO  | Task 
85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:28.538846 | orchestrator | 2026-02-28 01:10:28 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:28.541070 | orchestrator | 2026-02-28 01:10:28 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:28.541124 | orchestrator | 2026-02-28 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:31.582317 | orchestrator | 2026-02-28 01:10:31 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:31.583139 | orchestrator | 2026-02-28 01:10:31 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:31.586470 | orchestrator | 2026-02-28 01:10:31 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:31.587299 | orchestrator | 2026-02-28 01:10:31 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:31.587371 | orchestrator | 2026-02-28 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:34.651974 | orchestrator | 2026-02-28 01:10:34 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:34.652271 | orchestrator | 2026-02-28 01:10:34 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:34.653587 | orchestrator | 2026-02-28 01:10:34 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:34.655605 | orchestrator | 2026-02-28 01:10:34 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:34.655682 | orchestrator | 2026-02-28 01:10:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:10:37.705363 | orchestrator | 2026-02-28 01:10:37 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:10:37.706962 | orchestrator | 2026-02-28 01:10:37 | INFO  | Task 
85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:10:37.708880 | orchestrator | 2026-02-28 01:10:37 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:10:37.711603 | orchestrator | 2026-02-28 01:10:37 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:10:37.711706 | orchestrator | 2026-02-28 01:10:37 | INFO  | Wait 1 second(s) until the next check
[identical polling cycles for tasks b7d7495a, 85df7d9a, 6da0e39b and 5b51cfa8 from 01:10:40 through 01:11:02 trimmed; all four tasks remained in state STARTED]
2026-02-28 01:11:05.151284 | orchestrator | 2026-02-28 01:11:05 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:11:05.152411 | orchestrator | 2026-02-28 01:11:05 | INFO  | Task 
85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:11:05.154250 | orchestrator | 2026-02-28 01:11:05 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:11:05.155728 | orchestrator | 2026-02-28 01:11:05 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state STARTED 2026-02-28 01:11:05.155778 | orchestrator | 2026-02-28 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:11:08.192442 | orchestrator | 2026-02-28 01:11:08 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state STARTED 2026-02-28 01:11:08.193829 | orchestrator | 2026-02-28 01:11:08 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:13:08.306828 | orchestrator | 2026-02-28 01:13:08 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:13:08.307095 | orchestrator | 2026-02-28 01:13:08 | INFO  | Task 5b51cfa8-6c41-4e14-8599-164f00d1dc99 is in state SUCCESS 2026-02-28 01:13:08.307114 | orchestrator | 2026-02-28 01:13:08 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:08.311405 | orchestrator | 2026-02-28 01:13:08.311471 | orchestrator | 2026-02-28 01:13:08.311480 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-28 01:13:08.311487 | orchestrator | 2026-02-28 01:13:08.311494 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-28 01:13:08.311502 | orchestrator | Saturday 28 February 2026 01:07:34 +0000 (0:00:00.749) 0:00:00.749 ***** 2026-02-28 01:13:08.311510 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:13:08.311519 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:13:08.311525 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:13:08.311532 | orchestrator | 2026-02-28 01:13:08.311540 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-28 01:13:08.311547 | 
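The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from a simple poll-until-terminal loop. A minimal sketch of that pattern, assuming a caller-supplied `get_state` callable (hypothetical; this is not the actual osism client API):

```python
import time

# Celery-style terminal task states (assumption based on the log output).
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_state, interval=1, log=print):
    """Poll each task until every one reaches a terminal state.

    get_state maps a task id to its current state string.
    Returns a dict of task id -> final state.
    """
    final = {}
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                final[task_id] = state
            else:
                still_running.append(task_id)
        pending = still_running
        if pending:
            log(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return final
```

Each cycle re-checks only the tasks that have not yet finished, which is why the number of status lines per cycle drops as tasks complete.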
orchestrator | Saturday 28 February 2026 01:07:35 +0000 (0:00:00.568) 0:00:01.318 ***** 2026-02-28 01:13:08.311554 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-28 01:13:08.311580 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-28 01:13:08.311588 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-28 01:13:08.311595 | orchestrator | 2026-02-28 01:13:08.311603 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-28 01:13:08.311611 | orchestrator | 2026-02-28 01:13:08.311618 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:13:08.311626 | orchestrator | Saturday 28 February 2026 01:07:36 +0000 (0:00:00.888) 0:00:02.206 ***** 2026-02-28 01:13:08.311633 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:13:08.311641 | orchestrator | 2026-02-28 01:13:08.311647 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-28 01:13:08.311653 | orchestrator | Saturday 28 February 2026 01:07:37 +0000 (0:00:00.995) 0:00:03.202 ***** 2026-02-28 01:13:08.311659 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-28 01:13:08.311665 | orchestrator | 2026-02-28 01:13:08.311671 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-28 01:13:08.311678 | orchestrator | Saturday 28 February 2026 01:07:41 +0000 (0:00:03.911) 0:00:07.113 ***** 2026-02-28 01:13:08.311684 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-28 01:13:08.311690 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-28 01:13:08.311696 | orchestrator | 2026-02-28 01:13:08.311702 | orchestrator | 
TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-28 01:13:08.311710 | orchestrator | Saturday 28 February 2026 01:07:48 +0000 (0:00:07.392) 0:00:14.506 ***** 2026-02-28 01:13:08.311718 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:13:08.311726 | orchestrator | 2026-02-28 01:13:08.311733 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-28 01:13:08.311741 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:03.248) 0:00:17.754 ***** 2026-02-28 01:13:08.311748 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-28 01:13:08.311756 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:13:08.311764 | orchestrator | 2026-02-28 01:13:08.311772 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-28 01:13:08.311779 | orchestrator | Saturday 28 February 2026 01:07:56 +0000 (0:00:04.734) 0:00:22.489 ***** 2026-02-28 01:13:08.311785 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:13:08.311792 | orchestrator | 2026-02-28 01:13:08.311799 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-28 01:13:08.311807 | orchestrator | Saturday 28 February 2026 01:07:59 +0000 (0:00:03.483) 0:00:25.973 ***** 2026-02-28 01:13:08.311847 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-28 01:13:08.311855 | orchestrator | 2026-02-28 01:13:08.311863 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-28 01:13:08.311870 | orchestrator | Saturday 28 February 2026 01:08:03 +0000 (0:00:03.132) 0:00:29.105 ***** 2026-02-28 01:13:08.311899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:13:08.311910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:13:08.311918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:13:08.311933 | orchestrator | 2026-02-28 01:13:08.311940 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:13:08.311948 | orchestrator | Saturday 28 February 2026 01:08:07 +0000 (0:00:04.007) 0:00:33.113 ***** 2026-02-28 01:13:08.311956 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:13:08.311963 | orchestrator | 2026-02-28 01:13:08.311971 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-28 01:13:08.311983 | orchestrator | Saturday 28 February 2026 01:08:07 +0000 (0:00:00.691) 0:00:33.804 ***** 2026-02-28 01:13:08.311992 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:08.312000 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:13:08.312009 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:08.312017 | orchestrator | 2026-02-28 01:13:08.312026 | orchestrator | TASK 
[glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-28 01:13:08.312035 | orchestrator | Saturday 28 February 2026 01:08:11 +0000 (0:00:04.018) 0:00:37.823 ***** 2026-02-28 01:13:08.312043 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:13:08.312050 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:13:08.312058 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:13:08.312066 | orchestrator | 2026-02-28 01:13:08.312073 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-28 01:13:08.312080 | orchestrator | Saturday 28 February 2026 01:08:14 +0000 (0:00:02.255) 0:00:40.079 ***** 2026-02-28 01:13:08.312088 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:13:08.312095 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:13:08.312103 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:13:08.312110 | orchestrator | 2026-02-28 01:13:08.312117 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-28 01:13:08.312123 | orchestrator | Saturday 28 February 2026 01:08:16 +0000 (0:00:02.431) 0:00:42.510 ***** 2026-02-28 01:13:08.312129 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:13:08.312135 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:13:08.312142 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:13:08.312155 | orchestrator | 2026-02-28 01:13:08.312162 | orchestrator | TASK [glance : Check if policies shall be overwritten] 
************************* 2026-02-28 01:13:08.312170 | orchestrator | Saturday 28 February 2026 01:08:18 +0000 (0:00:01.961) 0:00:44.471 ***** 2026-02-28 01:13:08.312177 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.312184 | orchestrator | 2026-02-28 01:13:08.312192 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-28 01:13:08.312200 | orchestrator | Saturday 28 February 2026 01:08:18 +0000 (0:00:00.225) 0:00:44.697 ***** 2026-02-28 01:13:08.312207 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.312214 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:08.312221 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:08.312229 | orchestrator | 2026-02-28 01:13:08.312236 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:13:08.312244 | orchestrator | Saturday 28 February 2026 01:08:19 +0000 (0:00:00.681) 0:00:45.378 ***** 2026-02-28 01:13:08.312251 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:13:08.312259 | orchestrator | 2026-02-28 01:13:08.312266 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-28 01:13:08.312274 | orchestrator | Saturday 28 February 2026 01:08:20 +0000 (0:00:01.307) 0:00:46.686 ***** 2026-02-28 01:13:08.312282 | orchestrator | changed: [testbed-node-0] => (item=glance-api; service definition identical to the first occurrence above) 2026-02-28 01:13:08.312295 | orchestrator | changed: [testbed-node-2] => (item=glance-api; service definition identical to the first occurrence above) 2026-02-28 01:13:08.312310 | orchestrator | changed: [testbed-node-1] => (item=glance-api; service definition identical to the first occurrence above) 2026-02-28 01:13:08.312318 | orchestrator | 2026-02-28 01:13:08.312325 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-28 01:13:08.312332 | orchestrator | Saturday 28 February 2026 01:08:27 +0000 (0:00:06.554) 0:00:53.240 ***** 2026-02-28 01:13:08.312345 | orchestrator | skipping: [testbed-node-1] => (item=glance-api; service definition identical to the first occurrence above) 2026-02-28 01:13:08.312359 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:08.312367 | orchestrator | skipping: [testbed-node-2] => (item=glance-api; service definition identical to the first occurrence above) 2026-02-28 01:13:08.312375 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:08.312387 | orchestrator | skipping: [testbed-node-0] => (item=glance-api; service definition identical to the first occurrence above) 2026-02-28 01:13:08.312396 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.312404 | orchestrator | 2026-02-28 01:13:08.312411 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-28 01:13:08.312419 | orchestrator | Saturday 28 February 2026 01:08:33 +0000 (0:00:06.521) 0:00:59.762 ***** 2026-02-28 01:13:08.312431 | orchestrator | skipping: [testbed-node-0] => (item=glance-api; service definition identical to the first occurrence above) 2026-02-28 01:13:08.312438 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.312446 | orchestrator | skipping: [testbed-node-1] => (item=glance-api; service definition identical to the first occurrence above) 2026-02-28 01:13:08.312455 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:08.312481 | orchestrator | skipping: [testbed-node-2] => (item=glance-api; service definition identical to the first occurrence above) 2026-02-28 01:13:08.312495 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:08.312503 | orchestrator | 2026-02-28 01:13:08.312510 | 
orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-28 01:13:08.312518 | orchestrator | Saturday 28 February 2026 01:08:39 +0000 (0:00:05.930) 0:01:05.692 ***** 2026-02-28 01:13:08.312525 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:08.312532 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.312539 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:08.312547 | orchestrator | 2026-02-28 01:13:08.312554 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-28 01:13:08.312602 | orchestrator | Saturday 28 February 2026 01:08:46 +0000 (0:00:06.508) 0:01:12.201 ***** 2026-02-28 01:13:08.312611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:13:08.312630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:13:08.312735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:13:08.312746 | orchestrator | 2026-02-28 01:13:08.312753 | orchestrator | TASK [glance : Copying over glance-api.conf] 
*********************************** 2026-02-28 01:13:08.312759 | orchestrator | Saturday 28 February 2026 01:08:52 +0000 (0:00:05.900) 0:01:18.101 ***** 2026-02-28 01:13:08.312766 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:08.312773 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:08.312781 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:13:08.312788 | orchestrator | 2026-02-28 01:13:08.312795 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-28 01:13:08.312803 | orchestrator | Saturday 28 February 2026 01:09:06 +0000 (0:00:14.401) 0:01:32.502 ***** 2026-02-28 01:13:08.312810 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:08.312817 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.312825 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:08.312832 | orchestrator | 2026-02-28 01:13:08.312839 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-28 01:13:08.312851 | orchestrator | Saturday 28 February 2026 01:09:12 +0000 (0:00:05.796) 0:01:38.299 ***** 2026-02-28 01:13:08.312859 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:08.312867 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:08.312874 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.312881 | orchestrator | 2026-02-28 01:13:08.312889 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-28 01:13:08.312895 | orchestrator | Saturday 28 February 2026 01:09:19 +0000 (0:00:07.477) 0:01:45.777 ***** 2026-02-28 01:13:08.312902 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.312914 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:08.312922 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:08.312930 | orchestrator | 2026-02-28 01:13:08.312937 | orchestrator | TASK [glance : Copying over 
property-protections-rules.conf] ******************* 2026-02-28 01:13:08.312945 | orchestrator | Saturday 28 February 2026 01:09:26 +0000 (0:00:07.070) 0:01:52.848 ***** 2026-02-28 01:13:08.312952 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:08.312960 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:08.312967 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.312974 | orchestrator | 2026-02-28 01:13:08.312981 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-28 01:13:08.312989 | orchestrator | Saturday 28 February 2026 01:09:31 +0000 (0:00:04.212) 0:01:57.061 ***** 2026-02-28 01:13:08.312996 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.313003 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:08.313011 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:08.313018 | orchestrator | 2026-02-28 01:13:08.313026 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-28 01:13:08.313034 | orchestrator | Saturday 28 February 2026 01:09:31 +0000 (0:00:00.424) 0:01:57.485 ***** 2026-02-28 01:13:08.313041 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-28 01:13:08.313049 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:08.313057 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-28 01:13:08.313064 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:08.313072 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-28 01:13:08.313079 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:08.313087 | orchestrator | 2026-02-28 01:13:08.313099 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-28 01:13:08.313107 | 
orchestrator | Saturday 28 February 2026 01:09:36 +0000 (0:00:04.948) 0:02:02.433 ***** 2026-02-28 01:13:08.313114 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:13:08.313122 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:08.313129 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:08.313137 | orchestrator | 2026-02-28 01:13:08.313144 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-28 01:13:08.313151 | orchestrator | Saturday 28 February 2026 01:09:43 +0000 (0:00:07.105) 0:02:09.539 ***** 2026-02-28 01:13:08.313158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:13:08.313178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}}) 2026-02-28 01:13:08.313192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-28 01:13:08.313204 | orchestrator | 2026-02-28 01:13:08.313211 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-28 01:13:08.313218 | orchestrator | Saturday 28 February 2026 01:09:48 +0000 (0:00:04.970) 
0:02:14.510 *****
2026-02-28 01:13:08.313226 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:13:08.313233 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:13:08.313240 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:08.313247 | orchestrator |
2026-02-28 01:13:08.313254 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-02-28 01:13:08.313262 | orchestrator | Saturday 28 February 2026 01:09:48 +0000 (0:00:00.344) 0:02:14.855 *****
2026-02-28 01:13:08.313269 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:08.313276 | orchestrator |
2026-02-28 01:13:08.313284 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-02-28 01:13:08.313291 | orchestrator | Saturday 28 February 2026 01:09:51 +0000 (0:00:02.322) 0:02:17.177 *****
2026-02-28 01:13:08.313298 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:08.313306 | orchestrator |
2026-02-28 01:13:08.313313 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-02-28 01:13:08.313321 | orchestrator | Saturday 28 February 2026 01:09:53 +0000 (0:00:02.565) 0:02:19.743 *****
2026-02-28 01:13:08.313329 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:08.313336 | orchestrator |
2026-02-28 01:13:08.313343 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-02-28 01:13:08.313351 | orchestrator | Saturday 28 February 2026 01:09:56 +0000 (0:00:02.458) 0:02:22.201 *****
2026-02-28 01:13:08.313357 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:08.313365 | orchestrator |
2026-02-28 01:13:08.313372 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-02-28 01:13:08.313379 | orchestrator | Saturday 28 February 2026 01:10:27 +0000 (0:00:31.379) 0:02:53.581 *****
2026-02-28 01:13:08.313386 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:08.313393 | orchestrator |
2026-02-28 01:13:08.313401 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-02-28 01:13:08.313408 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:03.103) 0:02:56.684 *****
2026-02-28 01:13:08.313415 | orchestrator |
2026-02-28 01:13:08.313427 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-02-28 01:13:08.313434 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:00.071) 0:02:56.755 *****
2026-02-28 01:13:08.313441 | orchestrator |
2026-02-28 01:13:08.313449 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-02-28 01:13:08.313456 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:00.067) 0:02:56.822 *****
2026-02-28 01:13:08.313463 | orchestrator |
2026-02-28 01:13:08.313471 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-02-28 01:13:08.313478 | orchestrator | Saturday 28 February 2026 01:10:30 +0000 (0:00:00.077) 0:02:56.900 *****
2026-02-28 01:13:08.313485 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:08.313493 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:13:08.313500 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:13:08.313507 | orchestrator |
2026-02-28 01:13:08.313514 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:13:08.313522 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-28 01:13:08.313530 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-28 01:13:08.313537 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-28 01:13:08.313552 | orchestrator |
2026-02-28 01:13:08.313603 | orchestrator |
2026-02-28 01:13:08.313611 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:13:08.313622 | orchestrator | Saturday 28 February 2026 01:11:10 +0000 (0:00:39.220) 0:03:36.121 *****
2026-02-28 01:13:08.313630 | orchestrator | ===============================================================================
2026-02-28 01:13:08.313637 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.22s
2026-02-28 01:13:08.313645 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 31.38s
2026-02-28 01:13:08.313650 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 14.40s
2026-02-28 01:13:08.313656 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 7.48s
2026-02-28 01:13:08.313662 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.39s
2026-02-28 01:13:08.313668 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 7.11s
2026-02-28 01:13:08.313675 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 7.07s
2026-02-28 01:13:08.313682 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.55s
2026-02-28 01:13:08.313689 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.52s
2026-02-28 01:13:08.313697 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 6.51s
2026-02-28 01:13:08.313704 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.93s
2026-02-28 01:13:08.313711 | orchestrator | glance : Copying over config.json files for services -------------------- 5.90s
2026-02-28 01:13:08.313718 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.80s
2026-02-28 01:13:08.313725 | orchestrator | glance : Check glance containers ---------------------------------------- 4.97s
2026-02-28 01:13:08.313733 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.95s
2026-02-28 01:13:08.313740 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.74s
2026-02-28 01:13:08.313747 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.21s
2026-02-28 01:13:08.313754 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.02s
2026-02-28 01:13:08.313762 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.01s
2026-02-28 01:13:08.313769 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.91s
2026-02-28 01:13:11.356904 | orchestrator |
2026-02-28 01:13:11.358009 | orchestrator |
2026-02-28 01:13:11.358279 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:13:11.358300 | orchestrator |
2026-02-28 01:13:11.358312 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:13:11.358323 | orchestrator | Saturday 28 February 2026 01:07:51 +0000 (0:00:00.331) 0:00:00.331 *****
2026-02-28 01:13:11.358335 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:13:11.358348 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:13:11.358359 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:13:11.358370 | orchestrator |
2026-02-28 01:13:11.358382 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:13:11.358394 | orchestrator | Saturday 28 February 2026 01:07:52 +0000 (0:00:00.376) 0:00:00.707 *****
2026-02-28 01:13:11.358415 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-02-28 01:13:11.358477 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-02-28 01:13:11.358497 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-02-28 01:13:11.358517 | orchestrator |
2026-02-28 01:13:11.358534 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-02-28 01:13:11.358547 | orchestrator |
2026-02-28 01:13:11.358626 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-28 01:13:11.358659 | orchestrator | Saturday 28 February 2026 01:07:52 +0000 (0:00:00.526) 0:00:01.233 *****
2026-02-28 01:13:11.358716 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:13:11.358737 | orchestrator |
2026-02-28 01:13:11.358754 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-02-28 01:13:11.358774 | orchestrator | Saturday 28 February 2026 01:07:53 +0000 (0:00:00.676) 0:00:01.910 *****
2026-02-28 01:13:11.358794 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-02-28 01:13:11.358811 | orchestrator |
2026-02-28 01:13:11.358830 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-02-28 01:13:11.358850 | orchestrator | Saturday 28 February 2026 01:07:57 +0000 (0:00:04.273) 0:00:06.184 *****
2026-02-28 01:13:11.358872 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-02-28 01:13:11.358892 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-02-28 01:13:11.358913 | orchestrator |
2026-02-28 01:13:11.358926 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-02-28 01:13:11.358937 | orchestrator | Saturday 28 February 2026 01:08:03 +0000
(0:00:05.353) 0:00:11.537 ***** 2026-02-28 01:13:11.358948 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:13:11.358959 | orchestrator | 2026-02-28 01:13:11.358971 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-28 01:13:11.358982 | orchestrator | Saturday 28 February 2026 01:08:05 +0000 (0:00:02.735) 0:00:14.273 ***** 2026-02-28 01:13:11.359069 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-28 01:13:11.359082 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:13:11.359094 | orchestrator | 2026-02-28 01:13:11.359105 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-28 01:13:11.359116 | orchestrator | Saturday 28 February 2026 01:08:09 +0000 (0:00:03.659) 0:00:17.932 ***** 2026-02-28 01:13:11.359143 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:13:11.359155 | orchestrator | 2026-02-28 01:13:11.359167 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-28 01:13:11.359178 | orchestrator | Saturday 28 February 2026 01:08:13 +0000 (0:00:04.004) 0:00:21.937 ***** 2026-02-28 01:13:11.359189 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-28 01:13:11.359200 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-28 01:13:11.359211 | orchestrator | 2026-02-28 01:13:11.359222 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-28 01:13:11.359233 | orchestrator | Saturday 28 February 2026 01:08:22 +0000 (0:00:09.399) 0:00:31.336 ***** 2026-02-28 01:13:11.359248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.359833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.359881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.359925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.359948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.359976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.359990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.360028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.360041 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.360053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.360070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.360082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.360094 | orchestrator | 2026-02-28 01:13:11.360105 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:13:11.360117 | orchestrator | Saturday 28 February 2026 01:08:25 +0000 (0:00:02.705) 0:00:34.042 ***** 2026-02-28 01:13:11.360261 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:11.360277 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:11.360288 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:11.360309 | orchestrator | 2026-02-28 01:13:11.360321 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:13:11.360332 | orchestrator | Saturday 28 February 2026 01:08:25 +0000 (0:00:00.320) 0:00:34.363 ***** 2026-02-28 01:13:11.360344 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:13:11.360355 | orchestrator | 2026-02-28 01:13:11.360366 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-28 01:13:11.360378 | orchestrator | Saturday 28 
February 2026 01:08:26 +0000 (0:00:00.684) 0:00:35.047 ***** 2026-02-28 01:13:11.360402 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-28 01:13:11.360415 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-28 01:13:11.360426 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-28 01:13:11.360437 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-28 01:13:11.360448 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-28 01:13:11.360459 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-28 01:13:11.360470 | orchestrator | 2026-02-28 01:13:11.360482 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-28 01:13:11.360493 | orchestrator | Saturday 28 February 2026 01:08:28 +0000 (0:00:02.229) 0:00:37.276 ***** 2026-02-28 01:13:11.360506 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:13:11.360519 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:13:11.360537 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:13:11.360549 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:13:11.360609 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:13:11.360623 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-28 01:13:11.360635 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:13:11.360653 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:13:11.360665 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:13:11.360692 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:13:11.360706 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:13:11.360718 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-28 01:13:11.360729 | orchestrator | 2026-02-28 01:13:11.360740 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-28 01:13:11.360752 | orchestrator | Saturday 28 February 2026 01:08:35 +0000 (0:00:06.308) 0:00:43.585 ***** 2026-02-28 01:13:11.360763 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:13:11.360775 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:13:11.360786 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-28 01:13:11.360798 | orchestrator | 2026-02-28 01:13:11.360809 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-28 01:13:11.360820 | orchestrator | Saturday 28 February 2026 01:08:38 +0000 (0:00:03.219) 0:00:46.804 ***** 2026-02-28 01:13:11.360832 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-28 01:13:11.360848 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-28 01:13:11.360860 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-28 
01:13:11.360878 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:13:11.360889 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:13:11.360902 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-28 01:13:11.360915 | orchestrator | 2026-02-28 01:13:11.360928 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-28 01:13:11.360941 | orchestrator | Saturday 28 February 2026 01:08:42 +0000 (0:00:03.732) 0:00:50.537 ***** 2026-02-28 01:13:11.360954 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-28 01:13:11.360967 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-28 01:13:11.360979 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-28 01:13:11.360992 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-28 01:13:11.361004 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-28 01:13:11.361017 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-28 01:13:11.361029 | orchestrator | 2026-02-28 01:13:11.361042 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-28 01:13:11.361055 | orchestrator | Saturday 28 February 2026 01:08:43 +0000 (0:00:01.901) 0:00:52.438 ***** 2026-02-28 01:13:11.361066 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:11.361077 | orchestrator | 2026-02-28 01:13:11.361089 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-28 01:13:11.361100 | orchestrator | Saturday 28 February 2026 01:08:44 +0000 (0:00:00.543) 0:00:52.981 ***** 2026-02-28 01:13:11.361111 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:11.361123 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:11.361134 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 01:13:11.361145 | orchestrator | 2026-02-28 01:13:11.361157 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:13:11.361168 | orchestrator | Saturday 28 February 2026 01:08:45 +0000 (0:00:00.817) 0:00:53.798 ***** 2026-02-28 01:13:11.361179 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:13:11.361190 | orchestrator | 2026-02-28 01:13:11.361201 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-28 01:13:11.361218 | orchestrator | Saturday 28 February 2026 01:08:46 +0000 (0:00:01.347) 0:00:55.146 ***** 2026-02-28 01:13:11.361231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.361243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.361275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.361288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.361300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.361321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.361333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.361345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.361369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.361381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.361392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.361412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11 | INFO  | Task b7d7495a-4a67-41be-bd17-f96aee22b42c is in state SUCCESS 2026-02-28 01:13:11.361437 | orchestrator | 2026-02-28
01:13:11.361449 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-28 01:13:11.361460 | orchestrator | Saturday 28 February 2026 01:08:52 +0000 (0:00:05.387) 0:01:00.534 ***** 2026-02-28 01:13:11.361472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:13:11.361490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361530 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:11.361549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:13:11.361604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:13:11.361644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361679 | orchestrator | skipping: [testbed-node-1] 
2026-02-28 01:13:11.361699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361729 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:11.361741 | orchestrator | 2026-02-28 01:13:11.361752 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-28 01:13:11.361763 | orchestrator | Saturday 28 February 2026 01:08:54 +0000 (0:00:02.446) 0:01:02.980 ***** 2026-02-28 01:13:11.361780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:13:11.361792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361834 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:11.361846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:13:11.361866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361906 | 
orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:11.361925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:13:11.361937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.361980 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:11.361991 | orchestrator | 2026-02-28 01:13:11.362002 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-02-28 01:13:11.362013 | orchestrator | Saturday 28 February 2026 01:08:59 +0000 (0:00:05.241) 0:01:08.222 ***** 2026-02-28 01:13:11.362084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.362097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.362116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362139 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.362152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362278 | orchestrator | 2026-02-28 01:13:11.362290 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-28 01:13:11.362302 | orchestrator | Saturday 28 February 2026 01:09:05 +0000 (0:00:06.239) 0:01:14.462 ***** 2026-02-28 01:13:11.362313 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-28 01:13:11.362325 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-28 01:13:11.362336 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-28 01:13:11.362347 | orchestrator | 2026-02-28 01:13:11.362358 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-28 01:13:11.362369 | orchestrator | Saturday 28 February 2026 01:09:07 +0000 (0:00:01.745) 0:01:16.208 ***** 2026-02-28 01:13:11.362387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.362408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.362421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.362438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.362612 | orchestrator | 2026-02-28 01:13:11.362632 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-28 01:13:11.362652 | orchestrator | Saturday 28 February 2026 01:09:28 +0000 (0:00:20.831) 0:01:37.039 ***** 2026-02-28 01:13:11.362671 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:13:11.362690 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:11.362702 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:11.362714 | orchestrator | 2026-02-28 01:13:11.362732 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-28 01:13:11.362745 | orchestrator | Saturday 28 February 2026 01:09:30 +0000 (0:00:02.442) 0:01:39.482 ***** 2026-02-28 01:13:11.362757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:13:11.362770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.362783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.362801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.362822 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:11.362834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:13:11.362854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.362867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.362880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.362891 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:11.362910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-28 01:13:11.362942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.362954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.362974 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-28 01:13:11.362986 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:11.362998 | orchestrator | 2026-02-28 01:13:11.363009 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-28 01:13:11.363020 | orchestrator | Saturday 28 February 2026 01:09:31 +0000 (0:00:00.768) 0:01:40.250 ***** 2026-02-28 01:13:11.363033 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:11.363044 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:11.363056 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:11.363067 | orchestrator | 2026-02-28 01:13:11.363079 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-28 01:13:11.363091 | orchestrator | Saturday 28 February 2026 01:09:32 +0000 (0:00:00.512) 0:01:40.763 ***** 2026-02-28 01:13:11.363102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.363121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.363141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-28 01:13:11.363163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.363177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.363189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.363201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.363229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.363242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.363263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.363275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.363287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-28 01:13:11.363298 | orchestrator | 2026-02-28 01:13:11.363310 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-28 01:13:11.363321 | orchestrator | Saturday 28 February 2026 01:09:36 +0000 (0:00:04.029) 0:01:44.793 ***** 2026-02-28 01:13:11.363333 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:11.363351 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:11.363362 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:11.363373 | orchestrator | 2026-02-28 01:13:11.363385 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-28 01:13:11.363397 | orchestrator | Saturday 28 February 2026 01:09:37 +0000 (0:00:00.967) 0:01:45.761 ***** 2026-02-28 01:13:11.363408 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:11.363418 | orchestrator | 2026-02-28 01:13:11.363430 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-28 01:13:11.363441 | orchestrator | Saturday 28 February 2026 01:09:39 +0000 (0:00:02.695) 0:01:48.456 ***** 2026-02-28 01:13:11.363453 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:11.363464 | orchestrator | 2026-02-28 01:13:11.363475 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-28 
01:13:11.363491 | orchestrator | Saturday 28 February 2026 01:09:42 +0000 (0:00:02.961) 0:01:51.418 ***** 2026-02-28 01:13:11.363503 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:11.363514 | orchestrator | 2026-02-28 01:13:11.363525 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-28 01:13:11.363536 | orchestrator | Saturday 28 February 2026 01:10:03 +0000 (0:00:20.681) 0:02:12.099 ***** 2026-02-28 01:13:11.363547 | orchestrator | 2026-02-28 01:13:11.363584 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-28 01:13:11.363598 | orchestrator | Saturday 28 February 2026 01:10:03 +0000 (0:00:00.079) 0:02:12.178 ***** 2026-02-28 01:13:11.363609 | orchestrator | 2026-02-28 01:13:11.363620 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-28 01:13:11.363632 | orchestrator | Saturday 28 February 2026 01:10:03 +0000 (0:00:00.086) 0:02:12.264 ***** 2026-02-28 01:13:11.363643 | orchestrator | 2026-02-28 01:13:11.363654 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-28 01:13:11.363665 | orchestrator | Saturday 28 February 2026 01:10:03 +0000 (0:00:00.077) 0:02:12.342 ***** 2026-02-28 01:13:11.363676 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:11.363687 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:13:11.363698 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:11.363710 | orchestrator | 2026-02-28 01:13:11.363721 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-28 01:13:11.363732 | orchestrator | Saturday 28 February 2026 01:10:31 +0000 (0:00:27.969) 0:02:40.311 ***** 2026-02-28 01:13:11.363742 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:11.363754 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:11.363765 | orchestrator | 
changed: [testbed-node-1] 2026-02-28 01:13:11.363776 | orchestrator | 2026-02-28 01:13:11.363787 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-28 01:13:11.363798 | orchestrator | Saturday 28 February 2026 01:10:44 +0000 (0:00:12.962) 0:02:53.273 ***** 2026-02-28 01:13:11.363809 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:11.363820 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:13:11.363831 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:11.363842 | orchestrator | 2026-02-28 01:13:11.363853 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-28 01:13:11.363864 | orchestrator | Saturday 28 February 2026 01:11:12 +0000 (0:00:27.512) 0:03:20.786 ***** 2026-02-28 01:13:11.363875 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:13:11.363886 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:13:11.363897 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:13:11.363908 | orchestrator | 2026-02-28 01:13:11.363927 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-28 01:13:11.363939 | orchestrator | Saturday 28 February 2026 01:11:24 +0000 (0:00:12.380) 0:03:33.167 ***** 2026-02-28 01:13:11.363950 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:11.363961 | orchestrator | 2026-02-28 01:13:11.363972 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:13:11.363984 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-28 01:13:11.364005 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:13:11.364016 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:13:11.364028 | orchestrator | 
2026-02-28 01:13:11.364039 | orchestrator | 2026-02-28 01:13:11.364050 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:13:11.364062 | orchestrator | Saturday 28 February 2026 01:11:24 +0000 (0:00:00.311) 0:03:33.478 ***** 2026-02-28 01:13:11.364073 | orchestrator | =============================================================================== 2026-02-28 01:13:11.364084 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.97s 2026-02-28 01:13:11.364096 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 27.51s 2026-02-28 01:13:11.364107 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 20.83s 2026-02-28 01:13:11.364118 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.68s 2026-02-28 01:13:11.364129 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.96s 2026-02-28 01:13:11.364140 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.38s 2026-02-28 01:13:11.364151 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.40s 2026-02-28 01:13:11.364162 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.31s 2026-02-28 01:13:11.364173 | orchestrator | cinder : Copying over config.json files for services -------------------- 6.24s 2026-02-28 01:13:11.364184 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.39s 2026-02-28 01:13:11.364195 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.35s 2026-02-28 01:13:11.364206 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 5.24s 2026-02-28 01:13:11.364218 | orchestrator | service-ks-register : cinder | Creating services 
------------------------ 4.27s 2026-02-28 01:13:11.364229 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.03s 2026-02-28 01:13:11.364240 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.00s 2026-02-28 01:13:11.364251 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.73s 2026-02-28 01:13:11.364262 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.66s 2026-02-28 01:13:11.364280 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.22s 2026-02-28 01:13:11.364292 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.96s 2026-02-28 01:13:11.364303 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.74s 2026-02-28 01:13:11.364315 | orchestrator | 2026-02-28 01:13:11 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:13:11.364327 | orchestrator | 2026-02-28 01:13:11 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:13:11.364338 | orchestrator | 2026-02-28 01:13:11 | INFO  | Task 234e9b92-583a-4ee6-9aa3-775e522f1257 is in state STARTED 2026-02-28 01:13:11.364349 | orchestrator | 2026-02-28 01:13:11 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:14.401783 | orchestrator | 2026-02-28 01:13:14 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:13:14.404431 | orchestrator | 2026-02-28 01:13:14 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:13:14.405542 | orchestrator | 2026-02-28 01:13:14 | INFO  | Task 234e9b92-583a-4ee6-9aa3-775e522f1257 is in state STARTED 2026-02-28 01:13:14.406172 | orchestrator | 2026-02-28 01:13:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:17.445541 | orchestrator | 2026-02-28 
01:13:17 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state STARTED 2026-02-28 01:13:17.447354 | orchestrator | 2026-02-28 01:13:17 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:13:17.450505 | orchestrator | 2026-02-28 01:13:17 | INFO  | Task 234e9b92-583a-4ee6-9aa3-775e522f1257 is in state STARTED 2026-02-28 01:13:17.450540 | orchestrator | 2026-02-28 01:13:17 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:20.489958 | orchestrator | 2026-02-28 01:13:20 | INFO  | Task 85df7d9a-250f-4186-a594-4921e6559fe9 is in state SUCCESS 2026-02-28 01:13:20.490325 | orchestrator | 2026-02-28 01:13:20 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:13:20.491753 | orchestrator | 2026-02-28 01:13:20 | INFO  | Task 234e9b92-583a-4ee6-9aa3-775e522f1257 is in state STARTED 2026-02-28 01:13:20.491792 | orchestrator | 2026-02-28 01:13:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:23.536418 | orchestrator | 2026-02-28 01:13:23 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:13:23.536891 | orchestrator | 2026-02-28 01:13:23 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:13:23.538494 | orchestrator | 2026-02-28 01:13:23 | INFO  | Task 234e9b92-583a-4ee6-9aa3-775e522f1257 is in state STARTED 2026-02-28 01:13:23.538708 | orchestrator | 2026-02-28 01:13:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:13:26.576154 | orchestrator | 2026-02-28 01:13:26 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:13:26.577017 | orchestrator | 2026-02-28 01:13:26 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:13:26.579607 | orchestrator | 2026-02-28 01:13:26 | INFO  | Task 234e9b92-583a-4ee6-9aa3-775e522f1257 is in state STARTED 2026-02-28 01:13:26.579662 | orchestrator | 2026-02-28 01:13:26 | INFO  | Wait 1 
second(s) until the next check
2026-02-28 01:13:29.618180 | orchestrator | 2026-02-28 01:13:29 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:13:29.620062 | orchestrator | 2026-02-28 01:13:29 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED
2026-02-28 01:13:29.623662 | orchestrator | 2026-02-28 01:13:29 | INFO  | Task 234e9b92-583a-4ee6-9aa3-775e522f1257 is in state SUCCESS
2026-02-28 01:13:29.625101 | orchestrator |
2026-02-28 01:13:29.625141 | orchestrator |
2026-02-28 01:13:29.625151 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:13:29.625159 | orchestrator |
2026-02-28 01:13:29.625165 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:13:29.625173 | orchestrator | Saturday 28 February 2026 01:09:52 +0000 (0:00:00.226) 0:00:00.226 *****
2026-02-28 01:13:29.625180 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:13:29.625188 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:13:29.625196 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:13:29.625202 | orchestrator |
2026-02-28 01:13:29.625209 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:13:29.625215 | orchestrator | Saturday 28 February 2026 01:09:52 +0000 (0:00:00.355) 0:00:00.582 *****
2026-02-28 01:13:29.625221 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-02-28 01:13:29.625229 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-02-28 01:13:29.625235 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-02-28 01:13:29.625242 | orchestrator |
2026-02-28 01:13:29.625248 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-02-28 01:13:29.625279 | orchestrator |
2026-02-28 01:13:29.625299 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-02-28 01:13:29.625306 | orchestrator | Saturday 28 February 2026 01:09:53 +0000 (0:00:00.744) 0:00:01.327 *****
2026-02-28 01:13:29.625313 | orchestrator |
2026-02-28 01:13:29.625319 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-02-28 01:13:29.625326 | orchestrator |
2026-02-28 01:13:29.625333 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-02-28 01:13:29.625341 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:13:29.625347 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:13:29.625354 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:13:29.625360 | orchestrator |
2026-02-28 01:13:29.625366 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:13:29.625374 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:13:29.625382 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:13:29.625388 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:13:29.625394 | orchestrator |
2026-02-28 01:13:29.625399 | orchestrator |
2026-02-28 01:13:29.625405 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:13:29.625411 | orchestrator | Saturday 28 February 2026 01:13:19 +0000 (0:03:26.876) 0:03:28.203 *****
2026-02-28 01:13:29.625417 | orchestrator | ===============================================================================
2026-02-28 01:13:29.625423 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 206.88s
2026-02-28 01:13:29.625429 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2026-02-28 01:13:29.625434 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2026-02-28 01:13:29.625440 | orchestrator |
2026-02-28 01:13:29.625446 | orchestrator |
2026-02-28 01:13:29.625452 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:13:29.625458 | orchestrator |
2026-02-28 01:13:29.625464 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:13:29.625470 | orchestrator | Saturday 28 February 2026 01:11:16 +0000 (0:00:00.336) 0:00:00.336 *****
2026-02-28 01:13:29.625476 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:13:29.625483 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:13:29.625489 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:13:29.625495 | orchestrator |
2026-02-28 01:13:29.625502 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:13:29.625509 | orchestrator | Saturday 28 February 2026 01:11:16 +0000 (0:00:00.354) 0:00:00.691 *****
2026-02-28 01:13:29.625515 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-28 01:13:29.625522 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-28 01:13:29.625528 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-28 01:13:29.625534 | orchestrator |
2026-02-28 01:13:29.625540 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-28 01:13:29.625546 | orchestrator |
2026-02-28 01:13:29.625604 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-28 01:13:29.625610 | orchestrator | Saturday 28 February 2026 01:11:17 +0000 (0:00:00.491) 0:00:01.182 *****
2026-02-28 01:13:29.625616 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:13:29.625622 | orchestrator |
2026-02-28
01:13:29.625627 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-28 01:13:29.625633 | orchestrator | Saturday 28 February 2026 01:11:17 +0000 (0:00:00.608) 0:00:01.790 ***** 2026-02-28 01:13:29.625642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:13:29.625675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:13:29.625689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-28 01:13:29.625697 | orchestrator |
2026-02-28 01:13:29.625703 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-02-28 01:13:29.625710 | orchestrator | Saturday 28 February 2026 01:11:18 +0000 (0:00:00.847) 0:00:02.638 *****
2026-02-28 01:13:29.625718 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-02-28 01:13:29.625726 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-02-28 01:13:29.625732 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 01:13:29.625740 | orchestrator |
2026-02-28 01:13:29.625746 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-28 01:13:29.625752 | orchestrator | Saturday 28 February 2026 01:11:19 +0000 (0:00:01.044) 0:00:03.682 *****
2026-02-28 01:13:29.625758 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:13:29.625765 | orchestrator |
2026-02-28 01:13:29.625771 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-02-28 01:13:29.625777 | orchestrator | Saturday 28 February 2026 01:11:20 +0000 (0:00:00.806) 0:00:04.489 *****
2026-02-28 01:13:29.625784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:13:29.625792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:13:29.625809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:13:29.625815 | orchestrator | 2026-02-28 01:13:29.625821 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-28 01:13:29.625827 | orchestrator | 
Saturday 28 February 2026 01:11:21 +0000 (0:00:01.512) 0:00:06.001 ***** 2026-02-28 01:13:29.625837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:13:29.625844 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:29.625851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:13:29.625857 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:29.625864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:13:29.625872 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:29.625877 | orchestrator | 2026-02-28 01:13:29.625883 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-28 01:13:29.625888 | orchestrator | Saturday 28 February 2026 01:11:22 +0000 (0:00:00.418) 0:00:06.420 ***** 2026-02-28 01:13:29.625894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:13:29.625904 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:13:29.625909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:13:29.625916 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:13:29.625928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-28 01:13:29.625934 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:13:29.625940 | orchestrator | 2026-02-28 01:13:29.625946 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-28 01:13:29.625953 | orchestrator | Saturday 28 February 2026 01:11:23 +0000 (0:00:01.114) 0:00:07.534 ***** 2026-02-28 01:13:29.625962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}}) 2026-02-28 01:13:29.625969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:13:29.625976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:13:29.625987 | orchestrator | 2026-02-28 01:13:29.625993 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-28 01:13:29.625999 | orchestrator | Saturday 28 February 2026 01:11:25 +0000 (0:00:01.597) 0:00:09.132 ***** 2026-02-28 01:13:29.626005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:13:29.626058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:13:29.626067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-28 01:13:29.626073 | orchestrator | 2026-02-28 01:13:29.626083 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-28 
01:13:29.626089 | orchestrator | Saturday 28 February 2026 01:11:26 +0000 (0:00:01.445) 0:00:10.577 *****
2026-02-28 01:13:29.626095 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:13:29.626102 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:13:29.626108 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:29.626114 | orchestrator |
2026-02-28 01:13:29.626120 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-28 01:13:29.626127 | orchestrator | Saturday 28 February 2026 01:11:27 +0000 (0:00:00.579) 0:00:11.157 *****
2026-02-28 01:13:29.626135 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-28 01:13:29.626142 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-28 01:13:29.626149 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-28 01:13:29.626157 | orchestrator |
2026-02-28 01:13:29.626164 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-28 01:13:29.626171 | orchestrator | Saturday 28 February 2026 01:11:28 +0000 (0:00:01.315) 0:00:12.472 *****
2026-02-28 01:13:29.626184 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-28 01:13:29.626192 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-28 01:13:29.626199 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-28 01:13:29.626205 | orchestrator |
2026-02-28 01:13:29.626212 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-28 01:13:29.626219 | orchestrator | Saturday 28 February 2026 01:11:29 +0000 (0:00:01.257) 0:00:13.730 *****
2026-02-28 01:13:29.626228 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-28 01:13:29.626234 | orchestrator |
2026-02-28 01:13:29.626243 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-28 01:13:29.626251 | orchestrator | Saturday 28 February 2026 01:11:30 +0000 (0:00:00.909) 0:00:14.640 *****
2026-02-28 01:13:29.626258 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-28 01:13:29.626265 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-28 01:13:29.626272 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:13:29.626279 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:13:29.626287 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:13:29.626294 | orchestrator |
2026-02-28 01:13:29.626301 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-28 01:13:29.626308 | orchestrator | Saturday 28 February 2026 01:11:31 +0000 (0:00:00.607) 0:00:15.402 *****
2026-02-28 01:13:29.626315 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:13:29.626321 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:13:29.626329 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:29.626336 | orchestrator |
2026-02-28 01:13:29.626343 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-28 01:13:29.626350 | orchestrator | Saturday 28 February 2026 01:11:31 +0000 (0:00:00.607) 0:00:16.010 *****
2026-02-28 01:13:29.626358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode':
1104101, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1955886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1104101, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1955886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1104101, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1955886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1104137, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.20229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1104137, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.20229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1104137, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.20229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1104193, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2145033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1104193, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2145033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1104193, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2145033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1104127, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1998613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1104127, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1998613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1104127, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1998613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1104194, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2165456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1104194, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2165456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1104194, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2165456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1104111, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1970634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1104111, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1970634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1104111, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1970634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626532 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1104162, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2069023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1104162, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2069023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1104162, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2069023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-02-28 01:13:29.626600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1104185, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2119024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1104185, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2119024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1104185, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2119024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1104097, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.193902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1104097, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.193902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1104097, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.193902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-02-28 01:13:29.626682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1104107, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1964219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1104107, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1964219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1104107, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1964219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-02-28 01:13:29.626711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1104133, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2004943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1104133, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2004943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1104133, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2004943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1104171, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.208936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1104171, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.208936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1104171, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.208936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1104192, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2139022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1104192, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2139022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1104192, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2139022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1104121, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1995273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1104121, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1995273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1104121, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1995273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1104182, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2109022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1104182, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2109022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1104182, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2109022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1104200, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2169025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1104200, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2169025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1104200, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2169025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1104167, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2079022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1104167, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2079022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1104167, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2079022, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1104161, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.205902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1104161, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.205902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1104161, 'dev': 114, 'nlink': 1, 'atime': 
1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.205902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1104154, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2043908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1104154, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2043908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1104154, 
'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2043908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1104176, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2099023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1104176, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2099023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
49016, 'inode': 1104176, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2099023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1104151, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2038043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1104151, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2038043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.626983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 43303, 'inode': 1104151, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2038043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.626989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1104190, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2129023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1104190, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2129023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1104190, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2129023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1104116, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1982033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1104116, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1982033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1104116, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.1982033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1104344, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2567234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1104344, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2567234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1104344, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2567234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1104235, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2349026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1104235, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2349026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1104235, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2349026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1104214, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2189023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1104214, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2189023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1104214, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2189023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1104266, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2369027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1104266, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2369027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1104266, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2369027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1104205, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.217618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1104205, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.217618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1104205, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.217618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1104305, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.246903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1104305, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.246903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1104305, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.246903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1104269, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2424264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1104269, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2424264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1104269, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2424264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1104311, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2479029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1104311, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2479029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1104311, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2479029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1104338, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.254903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1104338, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.254903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1104338, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.254903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1104300, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.24491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1104300, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.24491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1104300, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.24491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1104263, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2359028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1104263, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2359028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1104263, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2359028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1104231, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2269492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1104231, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2269492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1104231, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2269492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1104262, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2355626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1104262, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2355626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1104262, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2355626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1104217, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2239025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1104217, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2239025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1104217, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2239025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1104265, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2366138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1104265, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2366138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1104265, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2366138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1104323, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.254535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1104323, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.254535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1104323, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.254535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1104319, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.250903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1104319, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.250903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1104319, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.250903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1104207, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.218093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1104207, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.218093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1104207, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.218093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627628 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1104210, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2189023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.627640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1104210, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2189023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.627650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1104210, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2189023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.627657 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1104293, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.243903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.627663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1104293, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.243903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.627672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1104293, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.243903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.627679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1104316, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2489028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.627691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1104316, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2489028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-28 01:13:29.627703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1104316, 'dev': 114, 'nlink': 1, 'atime': 1772236963.0, 'mtime': 1772236963.0, 'ctime': 1772237895.2489028, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-28 01:13:29.627711 | orchestrator |
2026-02-28 01:13:29.627718 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-02-28 01:13:29.627725 | orchestrator | Saturday 28 February 2026 01:12:13 +0000 (0:00:41.201) 0:00:57.211 *****
2026-02-28 01:13:29.627733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-28 01:13:29.627740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-28 01:13:29.627751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-28 01:13:29.627763 | orchestrator |
2026-02-28 01:13:29.627770 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-02-28 01:13:29.627776 | orchestrator | Saturday 28 February 2026 01:12:14 +0000 (0:00:01.039) 0:00:58.251 *****
2026-02-28 01:13:29.627783 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:29.627790 | orchestrator |
2026-02-28 01:13:29.627797 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-02-28 01:13:29.627804 | orchestrator | Saturday 28 February 2026 01:12:16 +0000 (0:00:02.474) 0:01:00.726 *****
2026-02-28 01:13:29.627811 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:29.627818 | orchestrator |
2026-02-28 01:13:29.627825 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-28 01:13:29.627832 | orchestrator | Saturday 28 February 2026 01:12:18 +0000 (0:00:02.281) 0:01:03.007 *****
2026-02-28 01:13:29.627839 | orchestrator |
2026-02-28 01:13:29.627846 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-28 01:13:29.627853 | orchestrator | Saturday 28 February 2026 01:12:19 +0000 (0:00:00.086) 0:01:03.093 *****
2026-02-28 01:13:29.627860 | orchestrator |
2026-02-28 01:13:29.627867 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-28 01:13:29.627873 | orchestrator | Saturday 28 February 2026 01:12:19 +0000 (0:00:00.282) 0:01:03.376 *****
2026-02-28 01:13:29.627880 | orchestrator |
2026-02-28 01:13:29.627887 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-02-28 01:13:29.627894 | orchestrator | Saturday 28 February 2026 01:12:19 +0000 (0:00:00.072) 0:01:03.449 *****
2026-02-28 01:13:29.627901 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:13:29.627908 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:29.627915 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:13:29.627921 | orchestrator |
2026-02-28 01:13:29.627928 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-02-28 01:13:29.627935 | orchestrator | Saturday 28 February 2026 01:12:21 +0000 (0:00:01.914) 0:01:05.363 *****
2026-02-28 01:13:29.627941 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:13:29.627948 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:29.627955 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-02-28 01:13:29.627962 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-02-28 01:13:29.627969 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:13:29.627976 | orchestrator |
2026-02-28 01:13:29.627984 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-02-28 01:13:29.627994 | orchestrator | Saturday 28 February 2026 01:12:48 +0000 (0:00:26.713) 0:01:32.077 *****
2026-02-28 01:13:29.628001 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:13:29.628008 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:13:29.628015 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:13:29.628022 | orchestrator |
2026-02-28 01:13:29.628029 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-02-28 01:13:29.628035 | orchestrator | Saturday 28 February 2026 01:13:21 +0000 (0:00:33.899) 0:02:05.976 *****
2026-02-28 01:13:29.628042 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:13:29.628049 | orchestrator |
2026-02-28 01:13:29.628056 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-02-28 01:13:29.628063 | orchestrator | Saturday 28 February 2026 01:13:24 +0000 (0:00:02.502) 0:02:08.479 *****
2026-02-28 01:13:29.628070 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:13:29.628077 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:13:29.628084 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:13:29.628091 | orchestrator |
2026-02-28 01:13:29.628098 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-02-28 01:13:29.628105 | orchestrator | Saturday 28 February 2026 01:13:24 +0000 (0:00:00.551) 0:02:09.030 *****
2026-02-28 01:13:29.628119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-02-28 01:13:29.628128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-02-28 01:13:29.628136 | orchestrator |
2026-02-28 01:13:29.628143 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-02-28 01:13:29.628150 | orchestrator | Saturday 28 February 2026 01:13:27 +0000 (0:00:02.663) 0:02:11.693 *****
2026-02-28 01:13:29.628157 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:13:29.628165 | orchestrator |
2026-02-28 01:13:29.628172 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:13:29.628178 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 01:13:29.628184 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 01:13:29.628194 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-28 01:13:29.628200 | orchestrator |
2026-02-28 01:13:29.628206 | orchestrator |
2026-02-28 01:13:29.628213 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:13:29.628220 | orchestrator | Saturday 28 February 2026 01:13:27 +0000 (0:00:00.286) 0:02:11.980 *****
2026-02-28 01:13:29.628228 | orchestrator | ===============================================================================
2026-02-28 01:13:29.628235 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 41.20s
2026-02-28 01:13:29.628242 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.90s
2026-02-28 01:13:29.628250 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.71s
2026-02-28 01:13:29.628257 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.66s
2026-02-28 01:13:29.628264 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.50s
2026-02-28 01:13:29.628271 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.47s
2026-02-28 01:13:29.628278 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.28s
2026-02-28 01:13:29.628285 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.91s
2026-02-28 01:13:29.628292 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.60s
2026-02-28 01:13:29.628299 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.51s
2026-02-28 01:13:29.628305 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.45s
2026-02-28 01:13:29.628313 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.32s
2026-02-28 01:13:29.628320 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.26s
2026-02-28 01:13:29.628327 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.11s
2026-02-28 01:13:29.628334 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.04s
2026-02-28 01:13:29.628341 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.04s
2026-02-28 01:13:29.628348 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.91s
2026-02-28 01:13:29.628355 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.85s
2026-02-28 01:13:29.628362 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.81s
2026-02-28 01:13:29.628373 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.76s
2026-02-28 01:13:29.628384 | orchestrator | 2026-02-28 01:13:29 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:13:32.659344 | orchestrator | 2026-02-28 01:13:32 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:13:32.660645 | orchestrator | 2026-02-28 01:13:32 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED
2026-02-28 01:13:32.660694 | orchestrator | 2026-02-28 01:13:32 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:13:35.700236 | orchestrator | 2026-02-28 01:13:35 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:13:35.702148 | orchestrator | 2026-02-28 01:13:35 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED
2026-02-28 01:13:35.702599 | orchestrator | 2026-02-28 01:13:35 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:13:38.742297 | orchestrator | 2026-02-28 01:13:38 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:13:38.743296 | orchestrator | 2026-02-28 01:13:38 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED
2026-02-28 01:13:38.743332 | orchestrator | 2026-02-28 01:13:38 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:13:41.781359 | orchestrator | 2026-02-28 01:13:41 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:13:41.782147 | orchestrator | 2026-02-28 01:13:41 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED
2026-02-28 01:13:41.782786 | orchestrator | 2026-02-28 01:13:41 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:13:44.825830 | orchestrator | 2026-02-28 01:13:44 |
INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:13:44.826529 | orchestrator | 2026-02-28 01:13:44 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED
2026-02-28 01:13:44.826791 | orchestrator | 2026-02-28 01:13:44 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:15:34.484663 | orchestrator | 2026-02-28 01:15:34 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:15:34.485492 | orchestrator | 2026-02-28 01:15:34 | INFO
| Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:15:34.485539 | orchestrator | 2026-02-28 01:15:34 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:15:37.516219 | orchestrator | 2026-02-28 01:15:37 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:15:37.516602 | orchestrator | 2026-02-28 01:15:37 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:15:37.516631 | orchestrator | 2026-02-28 01:15:37 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:15:40.547745 | orchestrator | 2026-02-28 01:15:40 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:15:40.548384 | orchestrator | 2026-02-28 01:15:40 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:15:40.548416 | orchestrator | 2026-02-28 01:15:40 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:15:43.592687 | orchestrator | 2026-02-28 01:15:43 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:15:43.593900 | orchestrator | 2026-02-28 01:15:43 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:15:43.593937 | orchestrator | 2026-02-28 01:15:43 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:15:46.647663 | orchestrator | 2026-02-28 01:15:46 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:15:46.648706 | orchestrator | 2026-02-28 01:15:46 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:15:46.648738 | orchestrator | 2026-02-28 01:15:46 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:15:49.684200 | orchestrator | 2026-02-28 01:15:49 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:15:49.689624 | orchestrator | 2026-02-28 01:15:49 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 
01:15:49.689708 | orchestrator | 2026-02-28 01:15:49 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:15:52.737687 | orchestrator | 2026-02-28 01:15:52 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:15:52.739298 | orchestrator | 2026-02-28 01:15:52 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:15:52.739368 | orchestrator | 2026-02-28 01:15:52 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:15:55.780015 | orchestrator | 2026-02-28 01:15:55 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:15:55.781152 | orchestrator | 2026-02-28 01:15:55 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:15:55.781234 | orchestrator | 2026-02-28 01:15:55 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:15:58.821432 | orchestrator | 2026-02-28 01:15:58 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:15:58.823098 | orchestrator | 2026-02-28 01:15:58 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:15:58.823163 | orchestrator | 2026-02-28 01:15:58 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:01.857282 | orchestrator | 2026-02-28 01:16:01 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:01.858582 | orchestrator | 2026-02-28 01:16:01 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:01.858615 | orchestrator | 2026-02-28 01:16:01 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:04.892169 | orchestrator | 2026-02-28 01:16:04 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:04.894771 | orchestrator | 2026-02-28 01:16:04 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:04.894822 | orchestrator | 2026-02-28 01:16:04 | INFO  | Wait 1 second(s) 
until the next check 2026-02-28 01:16:07.937326 | orchestrator | 2026-02-28 01:16:07 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:07.938050 | orchestrator | 2026-02-28 01:16:07 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:07.938068 | orchestrator | 2026-02-28 01:16:07 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:10.977692 | orchestrator | 2026-02-28 01:16:10 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:10.980558 | orchestrator | 2026-02-28 01:16:10 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:10.980621 | orchestrator | 2026-02-28 01:16:10 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:14.036008 | orchestrator | 2026-02-28 01:16:14 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:14.036745 | orchestrator | 2026-02-28 01:16:14 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:14.036786 | orchestrator | 2026-02-28 01:16:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:17.074820 | orchestrator | 2026-02-28 01:16:17 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:17.077232 | orchestrator | 2026-02-28 01:16:17 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:17.077292 | orchestrator | 2026-02-28 01:16:17 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:20.120024 | orchestrator | 2026-02-28 01:16:20 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:20.120705 | orchestrator | 2026-02-28 01:16:20 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:20.120824 | orchestrator | 2026-02-28 01:16:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:23.165567 | orchestrator | 2026-02-28 
01:16:23 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:23.168293 | orchestrator | 2026-02-28 01:16:23 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:23.168375 | orchestrator | 2026-02-28 01:16:23 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:26.236848 | orchestrator | 2026-02-28 01:16:26 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:26.237100 | orchestrator | 2026-02-28 01:16:26 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:26.237122 | orchestrator | 2026-02-28 01:16:26 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:29.270840 | orchestrator | 2026-02-28 01:16:29 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:29.271825 | orchestrator | 2026-02-28 01:16:29 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:29.271879 | orchestrator | 2026-02-28 01:16:29 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:32.308899 | orchestrator | 2026-02-28 01:16:32 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:32.311554 | orchestrator | 2026-02-28 01:16:32 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:32.311691 | orchestrator | 2026-02-28 01:16:32 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:35.347134 | orchestrator | 2026-02-28 01:16:35 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:35.348316 | orchestrator | 2026-02-28 01:16:35 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:35.348396 | orchestrator | 2026-02-28 01:16:35 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:38.390262 | orchestrator | 2026-02-28 01:16:38 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state 
STARTED 2026-02-28 01:16:38.391123 | orchestrator | 2026-02-28 01:16:38 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:38.391159 | orchestrator | 2026-02-28 01:16:38 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:41.434962 | orchestrator | 2026-02-28 01:16:41 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:41.435612 | orchestrator | 2026-02-28 01:16:41 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:41.435655 | orchestrator | 2026-02-28 01:16:41 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:44.475874 | orchestrator | 2026-02-28 01:16:44 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:44.475962 | orchestrator | 2026-02-28 01:16:44 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:44.475973 | orchestrator | 2026-02-28 01:16:44 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:47.518100 | orchestrator | 2026-02-28 01:16:47 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:47.519276 | orchestrator | 2026-02-28 01:16:47 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:47.519801 | orchestrator | 2026-02-28 01:16:47 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:50.551699 | orchestrator | 2026-02-28 01:16:50 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:50.552881 | orchestrator | 2026-02-28 01:16:50 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:50.552978 | orchestrator | 2026-02-28 01:16:50 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:53.599824 | orchestrator | 2026-02-28 01:16:53 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:53.600785 | orchestrator | 2026-02-28 01:16:53 | INFO  
| Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:53.600861 | orchestrator | 2026-02-28 01:16:53 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:56.630333 | orchestrator | 2026-02-28 01:16:56 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:56.631961 | orchestrator | 2026-02-28 01:16:56 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:56.632113 | orchestrator | 2026-02-28 01:16:56 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:16:59.676887 | orchestrator | 2026-02-28 01:16:59 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:16:59.677659 | orchestrator | 2026-02-28 01:16:59 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:16:59.677718 | orchestrator | 2026-02-28 01:16:59 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:02.725459 | orchestrator | 2026-02-28 01:17:02 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:02.727549 | orchestrator | 2026-02-28 01:17:02 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:02.727622 | orchestrator | 2026-02-28 01:17:02 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:05.775919 | orchestrator | 2026-02-28 01:17:05 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:05.776015 | orchestrator | 2026-02-28 01:17:05 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:05.776027 | orchestrator | 2026-02-28 01:17:05 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:08.814858 | orchestrator | 2026-02-28 01:17:08 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:08.815814 | orchestrator | 2026-02-28 01:17:08 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 
01:17:08.816222 | orchestrator | 2026-02-28 01:17:08 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:11.858168 | orchestrator | 2026-02-28 01:17:11 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:11.859204 | orchestrator | 2026-02-28 01:17:11 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:11.859506 | orchestrator | 2026-02-28 01:17:11 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:14.899040 | orchestrator | 2026-02-28 01:17:14 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:14.901281 | orchestrator | 2026-02-28 01:17:14 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:14.901300 | orchestrator | 2026-02-28 01:17:14 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:17.932501 | orchestrator | 2026-02-28 01:17:17 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:17.933883 | orchestrator | 2026-02-28 01:17:17 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:17.934256 | orchestrator | 2026-02-28 01:17:17 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:20.967575 | orchestrator | 2026-02-28 01:17:20 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:20.968446 | orchestrator | 2026-02-28 01:17:20 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:20.968519 | orchestrator | 2026-02-28 01:17:20 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:24.063734 | orchestrator | 2026-02-28 01:17:24 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:24.064510 | orchestrator | 2026-02-28 01:17:24 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:24.064569 | orchestrator | 2026-02-28 01:17:24 | INFO  | Wait 1 second(s) 
until the next check 2026-02-28 01:17:27.107425 | orchestrator | 2026-02-28 01:17:27 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:27.108043 | orchestrator | 2026-02-28 01:17:27 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:27.108078 | orchestrator | 2026-02-28 01:17:27 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:30.150577 | orchestrator | 2026-02-28 01:17:30 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:30.152085 | orchestrator | 2026-02-28 01:17:30 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:30.152122 | orchestrator | 2026-02-28 01:17:30 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:33.197124 | orchestrator | 2026-02-28 01:17:33 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:33.198518 | orchestrator | 2026-02-28 01:17:33 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:33.198726 | orchestrator | 2026-02-28 01:17:33 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:36.242365 | orchestrator | 2026-02-28 01:17:36 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:36.246297 | orchestrator | 2026-02-28 01:17:36 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:36.246404 | orchestrator | 2026-02-28 01:17:36 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:39.293013 | orchestrator | 2026-02-28 01:17:39 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:39.295284 | orchestrator | 2026-02-28 01:17:39 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:39.295338 | orchestrator | 2026-02-28 01:17:39 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:42.354207 | orchestrator | 2026-02-28 
01:17:42 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:42.356628 | orchestrator | 2026-02-28 01:17:42 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:42.356748 | orchestrator | 2026-02-28 01:17:42 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:45.405185 | orchestrator | 2026-02-28 01:17:45 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:45.405650 | orchestrator | 2026-02-28 01:17:45 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:45.405977 | orchestrator | 2026-02-28 01:17:45 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:48.451229 | orchestrator | 2026-02-28 01:17:48 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:48.451783 | orchestrator | 2026-02-28 01:17:48 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:48.452128 | orchestrator | 2026-02-28 01:17:48 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:51.494209 | orchestrator | 2026-02-28 01:17:51 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:51.497871 | orchestrator | 2026-02-28 01:17:51 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:51.498072 | orchestrator | 2026-02-28 01:17:51 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:54.542883 | orchestrator | 2026-02-28 01:17:54 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED 2026-02-28 01:17:54.546325 | orchestrator | 2026-02-28 01:17:54 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED 2026-02-28 01:17:54.546414 | orchestrator | 2026-02-28 01:17:54 | INFO  | Wait 1 second(s) until the next check 2026-02-28 01:17:57.586304 | orchestrator | 2026-02-28 01:17:57 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state 
STARTED
2026-02-28 01:17:57.588297 | orchestrator | 2026-02-28 01:17:57 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED
2026-02-28 01:17:57.588336 | orchestrator | 2026-02-28 01:17:57 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:00.642462 | orchestrator | 2026-02-28 01:18:00 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:00.643451 | orchestrator | 2026-02-28 01:18:00 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state STARTED
2026-02-28 01:18:00.643501 | orchestrator | 2026-02-28 01:18:00 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:03.694882 | orchestrator | 2026-02-28 01:18:03 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:03.699675 | orchestrator | 2026-02-28 01:18:03 | INFO  | Task 6da0e39b-e5d7-4b8b-8c2d-103d1e0e25f2 is in state SUCCESS
2026-02-28 01:18:03.700057 | orchestrator |
2026-02-28 01:18:03.702053 | orchestrator |
2026-02-28 01:18:03.702088 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:18:03.702095 | orchestrator |
2026-02-28 01:18:03.702099 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-28 01:18:03.702104 | orchestrator | Saturday 28 February 2026 01:08:30 +0000 (0:00:00.907) 0:00:00.907 *****
2026-02-28 01:18:03.702109 | orchestrator | changed: [testbed-manager]
2026-02-28 01:18:03.702115 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.702120 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:03.702124 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:03.702128 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:18:03.702133 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:18:03.702137 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:18:03.702142 | orchestrator |
2026-02-28 01:18:03.702167 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:18:03.702172 | orchestrator | Saturday 28 February 2026 01:08:32 +0000 (0:00:02.235) 0:00:03.142 *****
2026-02-28 01:18:03.702176 | orchestrator | changed: [testbed-manager]
2026-02-28 01:18:03.702180 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.702191 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:03.702195 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:03.702200 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:18:03.702204 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:18:03.702212 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:18:03.702219 | orchestrator |
2026-02-28 01:18:03.702226 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:18:03.702236 | orchestrator | Saturday 28 February 2026 01:08:34 +0000 (0:00:01.283) 0:00:04.425 *****
2026-02-28 01:18:03.702244 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-28 01:18:03.702281 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-28 01:18:03.702289 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-28 01:18:03.702296 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-28 01:18:03.702305 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-28 01:18:03.702314 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-28 01:18:03.702324 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-28 01:18:03.702333 | orchestrator |
2026-02-28 01:18:03.702342 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-28 01:18:03.702662 | orchestrator |
2026-02-28 01:18:03.702671 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-28 01:18:03.702698 | orchestrator | Saturday 28 February 2026 01:08:36 +0000 (0:00:02.427) 0:00:06.853 *****
2026-02-28 01:18:03.702706 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:18:03.702713 | orchestrator |
2026-02-28 01:18:03.702719 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-28 01:18:03.702726 | orchestrator | Saturday 28 February 2026 01:08:37 +0000 (0:00:01.132) 0:00:07.985 *****
2026-02-28 01:18:03.702734 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-28 01:18:03.702741 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-28 01:18:03.702748 | orchestrator |
2026-02-28 01:18:03.702756 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-28 01:18:03.702762 | orchestrator | Saturday 28 February 2026 01:08:42 +0000 (0:00:05.046) 0:00:13.032 *****
2026-02-28 01:18:03.702769 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-28 01:18:03.702777 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-28 01:18:03.702783 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.702790 | orchestrator |
2026-02-28 01:18:03.702797 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-28 01:18:03.702804 | orchestrator | Saturday 28 February 2026 01:08:47 +0000 (0:00:05.082) 0:00:18.115 *****
2026-02-28 01:18:03.702810 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.702817 | orchestrator |
2026-02-28 01:18:03.702824 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-28 01:18:03.702831 | orchestrator | Saturday 28 February 2026 01:08:48 +0000 (0:00:00.964) 0:00:19.080 *****
2026-02-28 01:18:03.702837 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.702844 | orchestrator |
2026-02-28 01:18:03.702851 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-28 01:18:03.702857 | orchestrator | Saturday 28 February 2026 01:08:50 +0000 (0:00:02.078) 0:00:21.158 *****
2026-02-28 01:18:03.702864 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.702871 | orchestrator |
2026-02-28 01:18:03.702877 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-28 01:18:03.702884 | orchestrator | Saturday 28 February 2026 01:08:58 +0000 (0:00:07.683) 0:00:28.842 *****
2026-02-28 01:18:03.702890 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.702895 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.702901 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.702907 | orchestrator |
2026-02-28 01:18:03.702913 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-28 01:18:03.702919 | orchestrator | Saturday 28 February 2026 01:08:59 +0000 (0:00:01.333) 0:00:30.176 *****
2026-02-28 01:18:03.702924 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:03.702930 | orchestrator |
2026-02-28 01:18:03.702936 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-28 01:18:03.702942 | orchestrator | Saturday 28 February 2026 01:09:36 +0000 (0:00:36.695) 0:01:06.871 *****
2026-02-28 01:18:03.702948 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.702954 | orchestrator |
2026-02-28 01:18:03.702960 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-28 01:18:03.702966 | orchestrator | Saturday 28 February 2026 01:09:54 +0000 (0:00:18.212) 0:01:25.084 *****
2026-02-28 01:18:03.702972 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:03.702978 | orchestrator |
2026-02-28 01:18:03.702984 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-28 01:18:03.702990 | orchestrator | Saturday 28 February 2026 01:10:09 +0000 (0:00:14.326) 0:01:39.410 *****
2026-02-28 01:18:03.703009 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:03.703015 | orchestrator |
2026-02-28 01:18:03.703021 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-28 01:18:03.703027 | orchestrator | Saturday 28 February 2026 01:10:10 +0000 (0:00:01.312) 0:01:40.723 *****
2026-02-28 01:18:03.703038 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.703044 | orchestrator |
2026-02-28 01:18:03.703050 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-28 01:18:03.703056 | orchestrator | Saturday 28 February 2026 01:10:11 +0000 (0:00:00.508) 0:01:41.231 *****
2026-02-28 01:18:03.703062 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:18:03.703068 | orchestrator |
2026-02-28 01:18:03.703082 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-28 01:18:03.703088 | orchestrator | Saturday 28 February 2026 01:10:11 +0000 (0:00:00.565) 0:01:41.796 *****
2026-02-28 01:18:03.703094 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:03.703100 | orchestrator |
2026-02-28 01:18:03.703106 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-28 01:18:03.703112 | orchestrator | Saturday 28 February 2026 01:10:33 +0000 (0:00:22.297) 0:02:04.094 *****
2026-02-28 01:18:03.703117 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.703123 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703129 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703135 | orchestrator |
2026-02-28 01:18:03.703141 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-28 01:18:03.703147 | orchestrator |
2026-02-28 01:18:03.703153 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-28 01:18:03.703159 | orchestrator | Saturday 28 February 2026 01:10:34 +0000 (0:00:01.120) 0:02:05.214 *****
2026-02-28 01:18:03.703165 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:18:03.703184 | orchestrator |
2026-02-28 01:18:03.703190 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-28 01:18:03.703200 | orchestrator | Saturday 28 February 2026 01:10:35 +0000 (0:00:00.867) 0:02:06.081 *****
2026-02-28 01:18:03.703220 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703230 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703240 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.703249 | orchestrator |
2026-02-28 01:18:03.703258 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-28 01:18:03.703296 | orchestrator | Saturday 28 February 2026 01:10:38 +0000 (0:00:02.564) 0:02:08.646 *****
2026-02-28 01:18:03.703316 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703325 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703332 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.703342 | orchestrator |
2026-02-28 01:18:03.703416 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-28 01:18:03.703426 | orchestrator | Saturday 28 February 2026 01:10:40 +0000 (0:00:02.461) 0:02:11.107 *****
2026-02-28 01:18:03.703437 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.703463 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703473 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703492 | orchestrator |
2026-02-28 01:18:03.703502 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-28 01:18:03.703512 | orchestrator | Saturday 28 February 2026 01:10:41 +0000 (0:00:00.446) 0:02:11.553 *****
2026-02-28 01:18:03.703522 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-28 01:18:03.703533 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703543 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-28 01:18:03.703553 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703564 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-28 01:18:03.703574 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-28 01:18:03.703584 | orchestrator |
2026-02-28 01:18:03.703595 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-28 01:18:03.703606 | orchestrator | Saturday 28 February 2026 01:10:51 +0000 (0:00:10.292) 0:02:21.846 *****
2026-02-28 01:18:03.703615 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.703634 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703645 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703655 | orchestrator |
2026-02-28 01:18:03.703665 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-28 01:18:03.703675 | orchestrator | Saturday 28 February 2026 01:10:52 +0000 (0:00:00.534) 0:02:22.381 *****
2026-02-28 01:18:03.703684 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-28 01:18:03.703695 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.703705 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-28 01:18:03.703714 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703724 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-28 01:18:03.703734 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703744 | orchestrator |
2026-02-28 01:18:03.703753 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-28 01:18:03.703762 | orchestrator | Saturday 28 February 2026 01:10:53 +0000 (0:00:01.338) 0:02:23.719 *****
2026-02-28 01:18:03.703772 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703781 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703789 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.703798 | orchestrator |
2026-02-28 01:18:03.703806 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-28 01:18:03.703816 | orchestrator | Saturday 28 February 2026 01:10:54 +0000 (0:00:00.935) 0:02:24.655 *****
2026-02-28 01:18:03.703826 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703837 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703846 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.703855 | orchestrator |
2026-02-28 01:18:03.703865 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-28 01:18:03.703875 | orchestrator | Saturday 28 February 2026 01:10:55 +0000 (0:00:01.120) 0:02:25.776 *****
2026-02-28 01:18:03.703885 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703895 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703914 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.703925 | orchestrator |
2026-02-28 01:18:03.703935 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-28 01:18:03.703946 | orchestrator | Saturday 28 February 2026 01:10:58 +0000 (0:00:02.706) 0:02:28.482 *****
2026-02-28 01:18:03.703955 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.703965 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.703975 | orchestrator | ok: [testbed-node-0]
2026-02-28
01:18:03.703984 | orchestrator | 2026-02-28 01:18:03.703994 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-28 01:18:03.704004 | orchestrator | Saturday 28 February 2026 01:11:22 +0000 (0:00:24.533) 0:02:53.016 ***** 2026-02-28 01:18:03.704013 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.704030 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.704041 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:18:03.704051 | orchestrator | 2026-02-28 01:18:03.704061 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-28 01:18:03.704070 | orchestrator | Saturday 28 February 2026 01:11:37 +0000 (0:00:14.575) 0:03:07.592 ***** 2026-02-28 01:18:03.704080 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:18:03.704089 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.704099 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.704109 | orchestrator | 2026-02-28 01:18:03.704119 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-28 01:18:03.704129 | orchestrator | Saturday 28 February 2026 01:11:38 +0000 (0:00:01.175) 0:03:08.767 ***** 2026-02-28 01:18:03.704139 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.704149 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.704159 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:03.704168 | orchestrator | 2026-02-28 01:18:03.704178 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-28 01:18:03.704197 | orchestrator | Saturday 28 February 2026 01:11:51 +0000 (0:00:13.326) 0:03:22.093 ***** 2026-02-28 01:18:03.704207 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.704217 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.704227 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.704237 
| orchestrator | 2026-02-28 01:18:03.704247 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-28 01:18:03.704257 | orchestrator | Saturday 28 February 2026 01:11:53 +0000 (0:00:01.334) 0:03:23.428 ***** 2026-02-28 01:18:03.704267 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.704276 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.704286 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.704296 | orchestrator | 2026-02-28 01:18:03.704306 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-28 01:18:03.704315 | orchestrator | 2026-02-28 01:18:03.704325 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-28 01:18:03.704335 | orchestrator | Saturday 28 February 2026 01:11:53 +0000 (0:00:00.671) 0:03:24.099 ***** 2026-02-28 01:18:03.704345 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:18:03.704376 | orchestrator | 2026-02-28 01:18:03.704387 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-28 01:18:03.704397 | orchestrator | Saturday 28 February 2026 01:11:54 +0000 (0:00:00.688) 0:03:24.787 ***** 2026-02-28 01:18:03.704407 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-28 01:18:03.704417 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-28 01:18:03.704427 | orchestrator | 2026-02-28 01:18:03.704438 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-28 01:18:03.704444 | orchestrator | Saturday 28 February 2026 01:11:58 +0000 (0:00:03.624) 0:03:28.412 ***** 2026-02-28 01:18:03.704451 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s 
-> internal)  2026-02-28 01:18:03.704458 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-28 01:18:03.704464 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-28 01:18:03.704470 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-28 01:18:03.704476 | orchestrator | 2026-02-28 01:18:03.704482 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-28 01:18:03.704488 | orchestrator | Saturday 28 February 2026 01:12:05 +0000 (0:00:07.153) 0:03:35.565 ***** 2026-02-28 01:18:03.704494 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-28 01:18:03.704500 | orchestrator | 2026-02-28 01:18:03.704506 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-02-28 01:18:03.704512 | orchestrator | Saturday 28 February 2026 01:12:08 +0000 (0:00:03.408) 0:03:38.974 ***** 2026-02-28 01:18:03.704518 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-28 01:18:03.704524 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-28 01:18:03.704530 | orchestrator | 2026-02-28 01:18:03.704536 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-28 01:18:03.704542 | orchestrator | Saturday 28 February 2026 01:12:12 +0000 (0:00:04.224) 0:03:43.198 ***** 2026-02-28 01:18:03.704548 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-28 01:18:03.704553 | orchestrator | 2026-02-28 01:18:03.704559 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-28 01:18:03.704565 | orchestrator | Saturday 28 February 2026 01:12:16 +0000 (0:00:03.346) 0:03:46.545 ***** 2026-02-28 01:18:03.704571 | 
orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-28 01:18:03.704577 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-28 01:18:03.704588 | orchestrator | 2026-02-28 01:18:03.704594 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-28 01:18:03.704605 | orchestrator | Saturday 28 February 2026 01:12:24 +0000 (0:00:07.781) 0:03:54.327 ***** 2026-02-28 01:18:03.704621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.704632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.704640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.704659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.704670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.704677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.704683 | orchestrator | 2026-02-28 01:18:03.704689 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-28 01:18:03.704696 | orchestrator | Saturday 28 February 2026 01:12:25 +0000 (0:00:01.357) 0:03:55.685 ***** 2026-02-28 01:18:03.704702 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.704707 | orchestrator | 2026-02-28 01:18:03.704713 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-28 01:18:03.704719 | orchestrator | Saturday 28 February 2026 01:12:25 +0000 (0:00:00.129) 0:03:55.814 ***** 2026-02-28 01:18:03.704725 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.704731 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.704737 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.704743 | orchestrator | 2026-02-28 01:18:03.704749 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-28 01:18:03.704755 | orchestrator | Saturday 28 February 2026 01:12:25 +0000 (0:00:00.321) 0:03:56.136 ***** 2026-02-28 01:18:03.704760 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-28 01:18:03.704766 | orchestrator | 2026-02-28 01:18:03.704772 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-28 01:18:03.704778 | orchestrator | Saturday 28 February 2026 01:12:27 +0000 (0:00:01.094) 0:03:57.230 ***** 2026-02-28 01:18:03.704784 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.704790 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.704796 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.704802 | orchestrator | 2026-02-28 01:18:03.704807 | orchestrator | TASK [nova : include_tasks] **************************************************** 
2026-02-28 01:18:03.704813 | orchestrator | Saturday 28 February 2026 01:12:27 +0000 (0:00:00.337) 0:03:57.567 ***** 2026-02-28 01:18:03.704819 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:18:03.704825 | orchestrator | 2026-02-28 01:18:03.704831 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-28 01:18:03.704837 | orchestrator | Saturday 28 February 2026 01:12:27 +0000 (0:00:00.591) 0:03:58.159 ***** 2026-02-28 01:18:03.704843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.704863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.704870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.704877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.704888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.704899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.704905 | orchestrator | 2026-02-28 01:18:03.704911 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-28 01:18:03.704917 | orchestrator | Saturday 28 February 2026 01:12:30 +0000 (0:00:02.755) 0:04:00.914 ***** 2026-02-28 01:18:03.704931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 01:18:03.704938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.704944 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.704950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 01:18:03.704961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.704967 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.704982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 01:18:03.704989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.704996 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.705001 | orchestrator | 2026-02-28 01:18:03.705008 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-28 01:18:03.705014 | orchestrator | Saturday 28 February 2026 01:12:31 +0000 (0:00:00.720) 0:04:01.635 ***** 2026-02-28 01:18:03.705020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 01:18:03.705031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.705037 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.705383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2026-02-28 01:18:03.705397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.705403 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.705410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 01:18:03.705422 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.705428 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.705434 | orchestrator | 2026-02-28 01:18:03.705440 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-28 01:18:03.705446 | orchestrator | Saturday 28 February 2026 01:12:32 +0000 (0:00:00.860) 0:04:02.495 ***** 2026-02-28 01:18:03.705461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.705468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.705475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.705486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.705497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.705506 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.705513 | orchestrator | 2026-02-28 01:18:03.705519 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-28 01:18:03.705524 | orchestrator | Saturday 28 February 2026 01:12:35 +0000 (0:00:02.761) 0:04:05.257 ***** 2026-02-28 01:18:03.705531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.705542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.705554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.705565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.705571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.705583 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.705589 | orchestrator | 2026-02-28 01:18:03.705595 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-28 01:18:03.705601 | orchestrator | Saturday 28 February 2026 01:12:41 +0000 (0:00:06.452) 0:04:11.710 ***** 2026-02-28 01:18:03.705607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 01:18:03.705616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.705623 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.705632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-28 01:18:03.705646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.705653 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.705659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2026-02-28 01:18:03.705666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-28 01:18:03.705672 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.705678 | orchestrator |
2026-02-28 01:18:03.705684 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-02-28 01:18:03.705690 | orchestrator | Saturday 28 February 2026 01:12:42 +0000 (0:00:00.692) 0:04:12.402 *****
2026-02-28 01:18:03.705696 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.705702 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:03.705708 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:03.705713 | orchestrator |
2026-02-28 01:18:03.705722 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-02-28 01:18:03.705728 | orchestrator | Saturday 28 February 2026 01:12:43 +0000 (0:00:01.567) 0:04:13.970 *****
2026-02-28 01:18:03.705734 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.705740 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.705746 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.705752 | orchestrator |
2026-02-28 01:18:03.705757 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-02-28 01:18:03.705763 | orchestrator | Saturday 28 February 2026 01:12:44 +0000 (0:00:00.397) 0:04:14.368 *****
2026-02-28 01:18:03.705793 |
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.705846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.705858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:03.705875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.705886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.705893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.705899 | orchestrator | 2026-02-28 01:18:03.705905 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-28 01:18:03.705911 | orchestrator | Saturday 28 February 2026 01:12:46 +0000 (0:00:02.261) 0:04:16.629 ***** 2026-02-28 01:18:03.705917 | orchestrator | 2026-02-28 01:18:03.705923 
| orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-28 01:18:03.705929 | orchestrator | Saturday 28 February 2026 01:12:46 +0000 (0:00:00.134) 0:04:16.764 *****
2026-02-28 01:18:03.705935 | orchestrator |
2026-02-28 01:18:03.705940 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-28 01:18:03.705946 | orchestrator | Saturday 28 February 2026 01:12:46 +0000 (0:00:00.134) 0:04:16.898 *****
2026-02-28 01:18:03.705952 | orchestrator |
2026-02-28 01:18:03.705958 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-02-28 01:18:03.705964 | orchestrator | Saturday 28 February 2026 01:12:46 +0000 (0:00:00.134) 0:04:17.032 *****
2026-02-28 01:18:03.705969 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.705975 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:03.705983 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:03.705990 | orchestrator |
2026-02-28 01:18:03.705997 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-02-28 01:18:03.706003 | orchestrator | Saturday 28 February 2026 01:13:10 +0000 (0:00:23.636) 0:04:40.668 *****
2026-02-28 01:18:03.706010 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:03.706056 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:03.706064 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:03.706070 | orchestrator |
2026-02-28 01:18:03.706077 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-02-28 01:18:03.706085 | orchestrator |
2026-02-28 01:18:03.706091 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-28 01:18:03.706098 | orchestrator | Saturday 28 February 2026 01:13:22 +0000 (0:00:11.789) 0:04:52.458 *****
2026-02-28 01:18:03.706106 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:18:03.706114 | orchestrator |
2026-02-28 01:18:03.706121 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-28 01:18:03.706128 | orchestrator | Saturday 28 February 2026 01:13:23 +0000 (0:00:01.524) 0:04:53.983 *****
2026-02-28 01:18:03.706134 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:18:03.706141 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:18:03.706148 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:18:03.706155 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.706161 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.706168 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.706179 | orchestrator |
2026-02-28 01:18:03.706186 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-02-28 01:18:03.706193 | orchestrator | Saturday 28 February 2026 01:13:24 +0000 (0:00:00.829) 0:04:54.813 *****
2026-02-28 01:18:03.706200 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.706207 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.706213 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.706220 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-28 01:18:03.706227 | orchestrator |
2026-02-28 01:18:03.706234 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-28 01:18:03.706286 | orchestrator | Saturday 28 February 2026 01:13:25 +0000 (0:00:01.337) 0:04:56.151 *****
2026-02-28 01:18:03.706294 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-02-28 01:18:03.706301 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-02-28 01:18:03.706308 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-02-28 01:18:03.706315 | orchestrator |
2026-02-28 01:18:03.706322 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-28 01:18:03.706329 | orchestrator | Saturday 28 February 2026 01:13:26 +0000 (0:00:00.777) 0:04:56.928 *****
2026-02-28 01:18:03.706336 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-02-28 01:18:03.706343 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-02-28 01:18:03.706378 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-02-28 01:18:03.706384 | orchestrator |
2026-02-28 01:18:03.706390 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-28 01:18:03.706396 | orchestrator | Saturday 28 February 2026 01:13:28 +0000 (0:00:01.425) 0:04:58.354 *****
2026-02-28 01:18:03.706402 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-02-28 01:18:03.706408 | orchestrator | skipping: [testbed-node-3]
2026-02-28 01:18:03.706414 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-02-28 01:18:03.706420 | orchestrator | skipping: [testbed-node-4]
2026-02-28 01:18:03.706425 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-02-28 01:18:03.706431 | orchestrator | skipping: [testbed-node-5]
2026-02-28 01:18:03.706437 | orchestrator |
2026-02-28 01:18:03.706443 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-02-28 01:18:03.706449 | orchestrator | Saturday 28 February 2026 01:13:28 +0000 (0:00:00.629) 0:04:58.983 *****
2026-02-28 01:18:03.706456 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-28 01:18:03.706462 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-28 01:18:03.706468 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.706474 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-28 01:18:03.706480 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-28 01:18:03.706486 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-28 01:18:03.706491 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.706498 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-28 01:18:03.706504 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-28 01:18:03.706510 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-28 01:18:03.706516 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.706522 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-28 01:18:03.706528 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-28 01:18:03.706534 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-28 01:18:03.706540 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-28 01:18:03.706550 | orchestrator |
2026-02-28 01:18:03.706557 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-02-28 01:18:03.706562 | orchestrator | Saturday 28 February 2026 01:13:30 +0000 (0:00:01.481) 0:05:00.465 *****
2026-02-28 01:18:03.706568 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.706574 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.706581 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.706586 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:18:03.706593 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:18:03.706598 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:18:03.706604 | orchestrator |
2026-02-28 01:18:03.706610 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-02-28 01:18:03.706616 | orchestrator | Saturday 28 February 2026 01:13:31 +0000 (0:00:01.238) 0:05:01.703 *****
2026-02-28 01:18:03.706622 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.706628 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.706634 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.706640 | orchestrator | changed: [testbed-node-5]
2026-02-28 01:18:03.706646 | orchestrator | changed: [testbed-node-3]
2026-02-28 01:18:03.706654 | orchestrator | changed: [testbed-node-4]
2026-02-28 01:18:03.706663 | orchestrator |
2026-02-28 01:18:03.706673 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-28 01:18:03.706682 | orchestrator | Saturday 28 February 2026 01:13:33 +0000 (0:00:01.926) 0:05:03.630 *****
2026-02-28 01:18:03.706692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-28 01:18:03.706721 | orchestrator | changed: [testbed-node-4] =>
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706847 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706857 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706895 | orchestrator | 2026-02-28 01:18:03.706906 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-28 01:18:03.706921 | orchestrator | Saturday 28 February 2026 
01:13:35 +0000 (0:00:02.405) 0:05:06.035 ***** 2026-02-28 01:18:03.706932 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:18:03.706943 | orchestrator | 2026-02-28 01:18:03.706952 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-28 01:18:03.706962 | orchestrator | Saturday 28 February 2026 01:13:37 +0000 (0:00:01.367) 0:05:07.403 ***** 2026-02-28 01:18:03.706972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.706982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707041 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707270 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.707295 | orchestrator | 2026-02-28 01:18:03.707301 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-28 01:18:03.707308 | orchestrator | Saturday 28 February 2026 01:13:41 +0000 (0:00:04.061) 0:05:11.464 ***** 2026-02-28 01:18:03.707323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.707335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.707342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707371 | orchestrator | skipping: [testbed-node-3] 2026-02-28 
01:18:03.707380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.707386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.707396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707403 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.707413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.707423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.707429 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707435 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.707442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:18:03.707449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707455 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.707465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:18:03.707479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707485 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.707492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:18:03.707503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707513 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.707522 | orchestrator | 2026-02-28 01:18:03.707531 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-28 01:18:03.707540 | orchestrator | Saturday 28 February 2026 01:13:43 +0000 (0:00:01.855) 0:05:13.319 ***** 2026-02-28 01:18:03.707550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 
01:18:03.707559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.707576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707592 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.707605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.707616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.707626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.707635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.707660 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.707682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707692 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.707701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:18:03.707711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707720 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.707730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:18:03.707739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707748 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.707757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:18:03.707778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.707788 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.707797 | orchestrator | 2026-02-28 01:18:03.707806 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-28 01:18:03.707815 | orchestrator | Saturday 28 February 2026 01:13:45 +0000 (0:00:02.758) 0:05:16.077 ***** 2026-02-28 01:18:03.707824 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.707834 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.707847 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.707856 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:18:03.707865 | orchestrator | 2026-02-28 01:18:03.707874 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-28 01:18:03.707883 | orchestrator | Saturday 28 February 2026 01:13:47 +0000 (0:00:01.215) 0:05:17.293 ***** 2026-02-28 01:18:03.707892 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 01:18:03.707902 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-28 01:18:03.707911 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-28 01:18:03.707920 | orchestrator | 2026-02-28 01:18:03.707929 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-28 01:18:03.707938 | orchestrator | Saturday 28 February 2026 01:13:48 +0000 (0:00:01.105) 0:05:18.399 ***** 2026-02-28 01:18:03.707947 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 01:18:03.707955 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-28 01:18:03.707964 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-28 
01:18:03.707974 | orchestrator | 2026-02-28 01:18:03.707983 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-28 01:18:03.707991 | orchestrator | Saturday 28 February 2026 01:13:49 +0000 (0:00:01.210) 0:05:19.610 ***** 2026-02-28 01:18:03.708001 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:18:03.708010 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:18:03.708019 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:18:03.708028 | orchestrator | 2026-02-28 01:18:03.708037 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-28 01:18:03.708046 | orchestrator | Saturday 28 February 2026 01:13:49 +0000 (0:00:00.534) 0:05:20.144 ***** 2026-02-28 01:18:03.708055 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:18:03.708064 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:18:03.708073 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:18:03.708082 | orchestrator | 2026-02-28 01:18:03.708091 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-02-28 01:18:03.708100 | orchestrator | Saturday 28 February 2026 01:13:50 +0000 (0:00:00.867) 0:05:21.011 ***** 2026-02-28 01:18:03.708109 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-28 01:18:03.708118 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-28 01:18:03.708127 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-28 01:18:03.708137 | orchestrator | 2026-02-28 01:18:03.708147 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-28 01:18:03.708162 | orchestrator | Saturday 28 February 2026 01:13:52 +0000 (0:00:01.277) 0:05:22.289 ***** 2026-02-28 01:18:03.708171 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-28 01:18:03.708179 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-28 
01:18:03.708188 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-28 01:18:03.708197 | orchestrator | 2026-02-28 01:18:03.708205 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-28 01:18:03.708214 | orchestrator | Saturday 28 February 2026 01:13:53 +0000 (0:00:01.244) 0:05:23.533 ***** 2026-02-28 01:18:03.708223 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-28 01:18:03.708232 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-28 01:18:03.708241 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-28 01:18:03.708249 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-28 01:18:03.708258 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-28 01:18:03.708267 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-28 01:18:03.708277 | orchestrator | 2026-02-28 01:18:03.708286 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-28 01:18:03.708295 | orchestrator | Saturday 28 February 2026 01:13:57 +0000 (0:00:04.061) 0:05:27.595 ***** 2026-02-28 01:18:03.708304 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.708313 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.708323 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.708332 | orchestrator | 2026-02-28 01:18:03.708342 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-28 01:18:03.708374 | orchestrator | Saturday 28 February 2026 01:13:57 +0000 (0:00:00.576) 0:05:28.172 ***** 2026-02-28 01:18:03.708384 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.708393 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.708402 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.708411 | orchestrator | 2026-02-28 01:18:03.708420 | 
orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-28 01:18:03.708429 | orchestrator | Saturday 28 February 2026 01:13:58 +0000 (0:00:00.349) 0:05:28.521 ***** 2026-02-28 01:18:03.708438 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:18:03.708447 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:18:03.708456 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:18:03.708465 | orchestrator | 2026-02-28 01:18:03.708474 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-28 01:18:03.708484 | orchestrator | Saturday 28 February 2026 01:13:59 +0000 (0:00:01.335) 0:05:29.857 ***** 2026-02-28 01:18:03.708502 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-28 01:18:03.708513 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-28 01:18:03.708524 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-28 01:18:03.708536 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-28 01:18:03.708553 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-28 01:18:03.708565 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-28 01:18:03.708575 | orchestrator | 2026-02-28 01:18:03.708585 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-28 01:18:03.708595 | orchestrator | Saturday 28 
February 2026 01:14:03 +0000 (0:00:03.588) 0:05:33.445 ***** 2026-02-28 01:18:03.708618 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-28 01:18:03.708627 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-28 01:18:03.708636 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-28 01:18:03.708645 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-28 01:18:03.708654 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:18:03.708663 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-28 01:18:03.708672 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:18:03.708681 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-28 01:18:03.708690 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:18:03.708699 | orchestrator | 2026-02-28 01:18:03.708708 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-02-28 01:18:03.708717 | orchestrator | Saturday 28 February 2026 01:14:06 +0000 (0:00:03.716) 0:05:37.162 ***** 2026-02-28 01:18:03.708726 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.708736 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.708745 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.708755 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-28 01:18:03.708765 | orchestrator | 2026-02-28 01:18:03.708774 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-02-28 01:18:03.708783 | orchestrator | Saturday 28 February 2026 01:14:08 +0000 (0:00:01.883) 0:05:39.046 ***** 2026-02-28 01:18:03.708793 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-28 01:18:03.708802 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 01:18:03.708810 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-28 01:18:03.708820 | orchestrator | 
2026-02-28 01:18:03.708828 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-02-28 01:18:03.708837 | orchestrator | Saturday 28 February 2026 01:14:10 +0000 (0:00:01.378) 0:05:40.424 ***** 2026-02-28 01:18:03.708845 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.708853 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.708862 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.708871 | orchestrator | 2026-02-28 01:18:03.708880 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-28 01:18:03.708889 | orchestrator | Saturday 28 February 2026 01:14:10 +0000 (0:00:00.356) 0:05:40.781 ***** 2026-02-28 01:18:03.708898 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.708907 | orchestrator | 2026-02-28 01:18:03.708916 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-28 01:18:03.708925 | orchestrator | Saturday 28 February 2026 01:14:10 +0000 (0:00:00.133) 0:05:40.915 ***** 2026-02-28 01:18:03.708934 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.708945 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.708956 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.708965 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.708974 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.708983 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.708993 | orchestrator | 2026-02-28 01:18:03.709004 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-28 01:18:03.709014 | orchestrator | Saturday 28 February 2026 01:14:11 +0000 (0:00:00.638) 0:05:41.553 ***** 2026-02-28 01:18:03.709027 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-28 01:18:03.709039 | orchestrator | 2026-02-28 01:18:03.709048 | orchestrator | TASK [nova-cell : Set 
vendordata file path] ************************************ 2026-02-28 01:18:03.709057 | orchestrator | Saturday 28 February 2026 01:14:12 +0000 (0:00:01.042) 0:05:42.596 ***** 2026-02-28 01:18:03.709069 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.709079 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.709089 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.709099 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.709105 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.709121 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.709127 | orchestrator | 2026-02-28 01:18:03.709133 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-02-28 01:18:03.709143 | orchestrator | Saturday 28 February 2026 01:14:13 +0000 (0:00:00.649) 0:05:43.245 ***** 2026-02-28 01:18:03.709168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709197 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709299 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709409 | orchestrator | 2026-02-28 01:18:03.709414 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-28 01:18:03.709420 | orchestrator | Saturday 28 February 2026 01:14:17 +0000 (0:00:04.132) 0:05:47.378 ***** 2026-02-28 01:18:03.709426 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.709433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.709498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.709513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.709520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.709525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.709531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.709601 | orchestrator | 2026-02-28 01:18:03.709607 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-28 01:18:03.709612 | orchestrator | Saturday 28 February 2026 01:14:25 +0000 (0:00:08.042) 0:05:55.421 ***** 2026-02-28 01:18:03.709618 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.709624 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.709629 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.709635 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.709645 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.709651 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.709656 | orchestrator | 2026-02-28 01:18:03.709662 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-28 01:18:03.709667 | orchestrator | Saturday 28 February 2026 01:14:26 +0000 (0:00:01.480) 0:05:56.901 ***** 2026-02-28 01:18:03.709673 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-28 01:18:03.709679 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-28 01:18:03.709684 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-28 01:18:03.709693 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-28 01:18:03.709698 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-28 01:18:03.709704 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-28 01:18:03.709710 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.709715 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-28 01:18:03.709721 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-28 01:18:03.709726 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.709732 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-28 01:18:03.709738 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.709743 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-28 01:18:03.709749 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-28 01:18:03.709754 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-28 01:18:03.709760 | orchestrator | 2026-02-28 01:18:03.709765 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-28 01:18:03.709775 | orchestrator | Saturday 28 February 2026 01:14:30 +0000 (0:00:04.027) 0:06:00.929 ***** 2026-02-28 01:18:03.709781 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.709787 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.709792 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.709798 | 
orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.709803 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.709809 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.709814 | orchestrator | 2026-02-28 01:18:03.709820 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-28 01:18:03.709826 | orchestrator | Saturday 28 February 2026 01:14:31 +0000 (0:00:00.668) 0:06:01.597 ***** 2026-02-28 01:18:03.709831 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-28 01:18:03.709837 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-28 01:18:03.709842 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-28 01:18:03.709848 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-28 01:18:03.709854 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-28 01:18:03.709859 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-28 01:18:03.709865 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-28 01:18:03.709870 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-28 01:18:03.709876 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-28 01:18:03.709881 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 
'nova-libvirt'})  2026-02-28 01:18:03.709887 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.709892 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-28 01:18:03.709898 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.709904 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-28 01:18:03.709909 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.709915 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:18:03.709920 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:18:03.709926 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:18:03.709931 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:18:03.709942 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:18:03.709947 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-28 01:18:03.709953 | orchestrator | 2026-02-28 01:18:03.709958 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-28 01:18:03.709964 | orchestrator | Saturday 28 February 2026 01:14:37 +0000 (0:00:05.927) 0:06:07.525 ***** 2026-02-28 01:18:03.709969 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-28 01:18:03.709978 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  
2026-02-28 01:18:03.709990 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:18:03.709995 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-28 01:18:03.710001 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:18:03.710006 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-28 01:18:03.710039 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-28 01:18:03.710046 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-28 01:18:03.710051 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-28 01:18:03.710057 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-28 01:18:03.710063 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-28 01:18:03.710068 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-28 01:18:03.710074 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:18:03.710079 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-28 01:18:03.710085 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.710090 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-28 01:18:03.710096 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.710102 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-28 01:18:03.710107 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.710113 | 
orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:18:03.710118 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-28 01:18:03.710124 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:18:03.710130 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:18:03.710135 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-28 01:18:03.710141 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:18:03.710146 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:18:03.710152 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-28 01:18:03.710157 | orchestrator | 2026-02-28 01:18:03.710163 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-28 01:18:03.710169 | orchestrator | Saturday 28 February 2026 01:14:45 +0000 (0:00:07.797) 0:06:15.322 ***** 2026-02-28 01:18:03.710174 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.710180 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.710185 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.710191 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.710196 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.710202 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.710208 | orchestrator | 2026-02-28 01:18:03.710213 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-28 01:18:03.710219 | orchestrator | Saturday 28 February 2026 01:14:45 +0000 (0:00:00.895) 0:06:16.218 ***** 2026-02-28 01:18:03.710224 | orchestrator | 
skipping: [testbed-node-3] 2026-02-28 01:18:03.710230 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.710235 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.710241 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.710251 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.710257 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.710262 | orchestrator | 2026-02-28 01:18:03.710268 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-28 01:18:03.710273 | orchestrator | Saturday 28 February 2026 01:14:46 +0000 (0:00:00.671) 0:06:16.890 ***** 2026-02-28 01:18:03.710279 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.710284 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.710290 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.710295 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:18:03.710301 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:18:03.710307 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:18:03.710312 | orchestrator | 2026-02-28 01:18:03.710318 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-28 01:18:03.710323 | orchestrator | Saturday 28 February 2026 01:14:48 +0000 (0:00:02.206) 0:06:19.096 ***** 2026-02-28 01:18:03.710552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.710567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.710577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.710585 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.710594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.710610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.710626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.710636 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.710649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-28 01:18:03.710659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-28 01:18:03.710668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.710676 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.710691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:18:03.710700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.710709 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.710723 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:18:03.710736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.710745 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.710754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-28 01:18:03.710763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-28 01:18:03.710771 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.710780 | orchestrator | 2026-02-28 01:18:03.710788 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-28 01:18:03.710797 | orchestrator | Saturday 28 February 2026 01:14:50 +0000 (0:00:01.861) 0:06:20.957 ***** 2026-02-28 01:18:03.710813 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-28 01:18:03.710822 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-28 01:18:03.710830 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.710839 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-28 01:18:03.710848 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-28 01:18:03.710856 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.710865 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-28 01:18:03.710873 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-28 01:18:03.710882 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.710890 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-28 01:18:03.710898 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-28 01:18:03.710907 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.710915 | orchestrator | skipping: 
[testbed-node-1] => (item=nova-compute)  2026-02-28 01:18:03.710924 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-28 01:18:03.710933 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.710941 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-28 01:18:03.710949 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-28 01:18:03.710958 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.710966 | orchestrator | 2026-02-28 01:18:03.710975 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-28 01:18:03.710983 | orchestrator | Saturday 28 February 2026 01:14:51 +0000 (0:00:01.019) 0:06:21.977 ***** 2026-02-28 01:18:03.710997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711045 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 
01:18:03.711081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711123 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711160 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:03.711177 | orchestrator | 2026-02-28 01:18:03.711187 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-28 01:18:03.711196 | orchestrator | Saturday 28 February 2026 01:14:55 +0000 (0:00:03.431) 0:06:25.408 ***** 2026-02-28 01:18:03.711205 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.711214 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.711222 | orchestrator | skipping: 
[testbed-node-5] 2026-02-28 01:18:03.711231 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.711240 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.711248 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.711257 | orchestrator | 2026-02-28 01:18:03.711266 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-28 01:18:03.711274 | orchestrator | Saturday 28 February 2026 01:14:56 +0000 (0:00:00.931) 0:06:26.340 ***** 2026-02-28 01:18:03.711283 | orchestrator | 2026-02-28 01:18:03.711292 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-28 01:18:03.711300 | orchestrator | Saturday 28 February 2026 01:14:56 +0000 (0:00:00.137) 0:06:26.477 ***** 2026-02-28 01:18:03.711309 | orchestrator | 2026-02-28 01:18:03.711318 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-28 01:18:03.711326 | orchestrator | Saturday 28 February 2026 01:14:56 +0000 (0:00:00.150) 0:06:26.627 ***** 2026-02-28 01:18:03.711335 | orchestrator | 2026-02-28 01:18:03.711344 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-28 01:18:03.711372 | orchestrator | Saturday 28 February 2026 01:14:56 +0000 (0:00:00.144) 0:06:26.771 ***** 2026-02-28 01:18:03.711381 | orchestrator | 2026-02-28 01:18:03.711390 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-28 01:18:03.711399 | orchestrator | Saturday 28 February 2026 01:14:56 +0000 (0:00:00.147) 0:06:26.918 ***** 2026-02-28 01:18:03.711408 | orchestrator | 2026-02-28 01:18:03.711417 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-28 01:18:03.711425 | orchestrator | Saturday 28 February 2026 01:14:57 +0000 (0:00:00.324) 0:06:27.243 ***** 2026-02-28 01:18:03.711434 | orchestrator | 
2026-02-28 01:18:03.711443 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-28 01:18:03.711452 | orchestrator | Saturday 28 February 2026 01:14:57 +0000 (0:00:00.145) 0:06:27.389 ***** 2026-02-28 01:18:03.711461 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:03.711469 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:18:03.711478 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:18:03.711486 | orchestrator | 2026-02-28 01:18:03.711495 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-28 01:18:03.711506 | orchestrator | Saturday 28 February 2026 01:15:10 +0000 (0:00:13.080) 0:06:40.469 ***** 2026-02-28 01:18:03.711520 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:03.711530 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:18:03.711536 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:18:03.711542 | orchestrator | 2026-02-28 01:18:03.711547 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-28 01:18:03.711553 | orchestrator | Saturday 28 February 2026 01:15:24 +0000 (0:00:14.550) 0:06:55.020 ***** 2026-02-28 01:18:03.711558 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:18:03.711563 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:18:03.711569 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:18:03.711574 | orchestrator | 2026-02-28 01:18:03.711580 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-28 01:18:03.711586 | orchestrator | Saturday 28 February 2026 01:15:43 +0000 (0:00:18.678) 0:07:13.698 ***** 2026-02-28 01:18:03.711591 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:18:03.711596 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:18:03.711602 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:18:03.711613 | orchestrator | 2026-02-28 
01:18:03.711619 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-28 01:18:03.711624 | orchestrator | Saturday 28 February 2026 01:16:13 +0000 (0:00:30.288) 0:07:43.987 ***** 2026-02-28 01:18:03.711635 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-02-28 01:18:03.711641 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-02-28 01:18:03.711647 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-02-28 01:18:03.711652 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:18:03.711658 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:18:03.711663 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:18:03.711669 | orchestrator | 2026-02-28 01:18:03.711674 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-28 01:18:03.711687 | orchestrator | Saturday 28 February 2026 01:16:20 +0000 (0:00:06.309) 0:07:50.297 ***** 2026-02-28 01:18:03.711693 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:18:03.711698 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:18:03.711704 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:18:03.711709 | orchestrator | 2026-02-28 01:18:03.711714 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-28 01:18:03.711720 | orchestrator | Saturday 28 February 2026 01:16:21 +0000 (0:00:01.039) 0:07:51.336 ***** 2026-02-28 01:18:03.711725 | orchestrator | changed: [testbed-node-4] 2026-02-28 01:18:03.711731 | orchestrator | changed: [testbed-node-3] 2026-02-28 01:18:03.711736 | orchestrator | changed: [testbed-node-5] 2026-02-28 01:18:03.711742 | orchestrator | 2026-02-28 01:18:03.711747 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update 
service versions] *** 2026-02-28 01:18:03.711753 | orchestrator | Saturday 28 February 2026 01:16:43 +0000 (0:00:21.940) 0:08:13.277 ***** 2026-02-28 01:18:03.711758 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.711764 | orchestrator | 2026-02-28 01:18:03.711769 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-28 01:18:03.711774 | orchestrator | Saturday 28 February 2026 01:16:43 +0000 (0:00:00.129) 0:08:13.406 ***** 2026-02-28 01:18:03.711780 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.711785 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.711790 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.711796 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.711801 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.711807 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-02-28 01:18:03.711813 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-28 01:18:03.711818 | orchestrator | 2026-02-28 01:18:03.711823 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-28 01:18:03.711829 | orchestrator | Saturday 28 February 2026 01:17:06 +0000 (0:00:23.521) 0:08:36.928 ***** 2026-02-28 01:18:03.711834 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.711840 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.711845 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.711850 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.711856 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.711861 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.711866 | orchestrator | 2026-02-28 01:18:03.711872 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-28 01:18:03.711877 | orchestrator | Saturday 28 February 2026 01:17:18 +0000 (0:00:11.816) 0:08:48.744 ***** 2026-02-28 01:18:03.711883 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.711888 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.711893 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.711899 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.711908 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.711914 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-02-28 01:18:03.711919 | orchestrator | 2026-02-28 01:18:03.711925 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-28 01:18:03.711930 | orchestrator | Saturday 28 February 2026 01:17:23 +0000 (0:00:05.068) 0:08:53.812 ***** 2026-02-28 01:18:03.711936 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-28 01:18:03.711941 | 
orchestrator | 2026-02-28 01:18:03.711946 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-28 01:18:03.711952 | orchestrator | Saturday 28 February 2026 01:17:38 +0000 (0:00:14.779) 0:09:08.591 ***** 2026-02-28 01:18:03.711957 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-28 01:18:03.711963 | orchestrator | 2026-02-28 01:18:03.711968 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-28 01:18:03.711974 | orchestrator | Saturday 28 February 2026 01:17:40 +0000 (0:00:01.655) 0:09:10.247 ***** 2026-02-28 01:18:03.711979 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.711984 | orchestrator | 2026-02-28 01:18:03.711990 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-28 01:18:03.711995 | orchestrator | Saturday 28 February 2026 01:17:41 +0000 (0:00:01.664) 0:09:11.912 ***** 2026-02-28 01:18:03.712000 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-28 01:18:03.712006 | orchestrator | 2026-02-28 01:18:03.712011 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-28 01:18:03.712017 | orchestrator | Saturday 28 February 2026 01:17:54 +0000 (0:00:12.941) 0:09:24.853 ***** 2026-02-28 01:18:03.712022 | orchestrator | ok: [testbed-node-3] 2026-02-28 01:18:03.712028 | orchestrator | ok: [testbed-node-4] 2026-02-28 01:18:03.712033 | orchestrator | ok: [testbed-node-5] 2026-02-28 01:18:03.712038 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:18:03.712044 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:18:03.712049 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:18:03.712055 | orchestrator | 2026-02-28 01:18:03.712060 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-28 01:18:03.712065 | orchestrator | 2026-02-28 
01:18:03.712071 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-28 01:18:03.712076 | orchestrator | Saturday 28 February 2026 01:17:56 +0000 (0:00:02.054) 0:09:26.908 ***** 2026-02-28 01:18:03.712082 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:03.712091 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:18:03.712096 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:18:03.712102 | orchestrator | 2026-02-28 01:18:03.712107 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-28 01:18:03.712113 | orchestrator | 2026-02-28 01:18:03.712118 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-28 01:18:03.712124 | orchestrator | Saturday 28 February 2026 01:17:57 +0000 (0:00:01.316) 0:09:28.225 ***** 2026-02-28 01:18:03.712130 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:03.712139 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:03.712148 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:03.712156 | orchestrator | 2026-02-28 01:18:03.712167 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-02-28 01:18:03.712181 | orchestrator | 2026-02-28 01:18:03.712196 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-02-28 01:18:03.712202 | orchestrator | Saturday 28 February 2026 01:17:58 +0000 (0:00:00.599) 0:09:28.825 ***** 2026-02-28 01:18:03.712208 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-02-28 01:18:03.712213 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-28 01:18:03.712219 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-28 01:18:03.712224 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-02-28 01:18:03.712237 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-02-28 01:18:03.712243 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-02-28 01:18:03.712248 | orchestrator | skipping: [testbed-node-3] 2026-02-28 01:18:03.712254 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-02-28 01:18:03.712259 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-28 01:18:03.712265 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-28 01:18:03.712270 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-02-28 01:18:03.712275 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-02-28 01:18:03.712281 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-02-28 01:18:03.712286 | orchestrator | skipping: [testbed-node-4] 2026-02-28 01:18:03.712292 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-02-28 01:18:03.712297 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-28 01:18:03.712303 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-28 01:18:03.712308 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-02-28 01:18:03.712313 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-02-28 01:18:03.712319 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-02-28 01:18:03.712324 | orchestrator | skipping: [testbed-node-5] 2026-02-28 01:18:03.712330 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-02-28 01:18:03.712335 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-28 01:18:03.712341 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-28 01:18:03.712346 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-02-28 01:18:03.712397 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy) 
2026-02-28 01:18:03.712403 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy) 
2026-02-28 01:18:03.712408 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.712414 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor) 
2026-02-28 01:18:03.712420 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
2026-02-28 01:18:03.712425 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic) 
2026-02-28 01:18:03.712431 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy) 
2026-02-28 01:18:03.712436 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy) 
2026-02-28 01:18:03.712441 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy) 
2026-02-28 01:18:03.712447 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.712453 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor) 
2026-02-28 01:18:03.712458 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute) 
2026-02-28 01:18:03.712464 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic) 
2026-02-28 01:18:03.712469 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy) 
2026-02-28 01:18:03.712475 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy) 
2026-02-28 01:18:03.712480 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy) 
2026-02-28 01:18:03.712486 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.712492 | orchestrator | 
2026-02-28 01:18:03.712497 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-28 01:18:03.712503 | orchestrator | 
2026-02-28 01:18:03.712508 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-28 01:18:03.712514 | orchestrator | Saturday 28 February 2026 01:18:00 +0000 (0:00:01.564) 0:09:30.390 *****
2026-02-28 01:18:03.712519 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler) 
2026-02-28 01:18:03.712525 | orchestrator | skipping: [testbed-node-0] => (item=nova-api) 
2026-02-28 01:18:03.712530 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler) 
2026-02-28 01:18:03.712541 | orchestrator | skipping: [testbed-node-1] => (item=nova-api) 
2026-02-28 01:18:03.712547 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.712552 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.712558 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler) 
2026-02-28 01:18:03.712563 | orchestrator | skipping: [testbed-node-2] => (item=nova-api) 
2026-02-28 01:18:03.712569 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:03.712574 | orchestrator | 
2026-02-28 01:18:03.712580 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-28 01:18:03.712585 | orchestrator | 
2026-02-28 01:18:03.712595 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-28 01:18:03.712601 | orchestrator | Saturday 28 February 2026 01:18:01 +0000 (0:00:01.041) 0:09:31.431 *****
2026-02-28 01:18:03.712609 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.712618 | orchestrator | 
2026-02-28 01:18:03.712630 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-28 01:18:03.712645 | orchestrator | 
2026-02-28 01:18:03.712653 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-28 01:18:03.712662 | orchestrator | Saturday 28 February 2026 01:18:02 +0000 (0:00:00.871) 0:09:32.303 *****
2026-02-28 01:18:03.712671 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:03.712679 | orchestrator | skipping: [testbed-node-1]
2026-02-28 01:18:03.712693 | orchestrator | skipping: [testbed-node-2]
2026-02-28 
01:18:03.712701 | orchestrator | 
2026-02-28 01:18:03.712709 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:18:03.712719 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:18:03.712730 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2026-02-28 01:18:03.712739 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0
2026-02-28 01:18:03.712749 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0
2026-02-28 01:18:03.712759 | orchestrator | testbed-node-3 : ok=40  changed=27  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-28 01:18:03.712768 | orchestrator | testbed-node-4 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-28 01:18:03.712778 | orchestrator | testbed-node-5 : ok=44  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-28 01:18:03.712784 | orchestrator | 
2026-02-28 01:18:03.712789 | orchestrator | 
2026-02-28 01:18:03.712795 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:18:03.712800 | orchestrator | Saturday 28 February 2026 01:18:02 +0000 (0:00:00.684) 0:09:32.988 *****
2026-02-28 01:18:03.712806 | orchestrator | ===============================================================================
2026-02-28 01:18:03.712812 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 36.69s
2026-02-28 01:18:03.712817 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.29s
2026-02-28 01:18:03.712823 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 24.53s
2026-02-28 01:18:03.712828 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.64s
2026-02-28 01:18:03.712833 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.52s
2026-02-28 01:18:03.712839 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 22.30s
2026-02-28 01:18:03.712844 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.94s
2026-02-28 01:18:03.712857 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.68s
2026-02-28 01:18:03.712863 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 18.21s
2026-02-28 01:18:03.712868 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.78s
2026-02-28 01:18:03.712874 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.58s
2026-02-28 01:18:03.712879 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.55s
2026-02-28 01:18:03.712884 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.33s
2026-02-28 01:18:03.712890 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.33s
2026-02-28 01:18:03.712895 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.08s
2026-02-28 01:18:03.712901 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.94s
2026-02-28 01:18:03.712906 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.82s
2026-02-28 01:18:03.712912 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.79s
2026-02-28 01:18:03.712917 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 10.29s
2026-02-28 01:18:03.712922 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.04s
2026-02-28 01:18:03.712928 | orchestrator | 2026-02-28 01:18:03 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:06.737918 | orchestrator | 2026-02-28 01:18:06 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:06.738118 | orchestrator | 2026-02-28 01:18:06 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:09.779623 | orchestrator | 2026-02-28 01:18:09 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:09.779714 | orchestrator | 2026-02-28 01:18:09 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:12.826561 | orchestrator | 2026-02-28 01:18:12 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:12.826696 | orchestrator | 2026-02-28 01:18:12 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:15.868112 | orchestrator | 2026-02-28 01:18:15 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:15.868195 | orchestrator | 2026-02-28 01:18:15 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:18.913890 | orchestrator | 2026-02-28 01:18:18 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:18.913969 | orchestrator | 2026-02-28 01:18:18 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:21.949910 | orchestrator | 2026-02-28 01:18:21 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:21.950014 | orchestrator | 2026-02-28 01:18:21 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:24.996978 | orchestrator | 2026-02-28 01:18:24 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:24.997077 | orchestrator | 2026-02-28 01:18:24 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:28.038413 | orchestrator | 2026-02-28 01:18:28 | INFO  | Task 
d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:28.038501 | orchestrator | 2026-02-28 01:18:28 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:31.088809 | orchestrator | 2026-02-28 01:18:31 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:31.088944 | orchestrator | 2026-02-28 01:18:31 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:34.128980 | orchestrator | 2026-02-28 01:18:34 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:34.129095 | orchestrator | 2026-02-28 01:18:34 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:37.165633 | orchestrator | 2026-02-28 01:18:37 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state STARTED
2026-02-28 01:18:37.165730 | orchestrator | 2026-02-28 01:18:37 | INFO  | Wait 1 second(s) until the next check
2026-02-28 01:18:40.207050 | orchestrator | 
2026-02-28 01:18:40.207193 | orchestrator | 
2026-02-28 01:18:40.207286 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-28 01:18:40.207301 | orchestrator | 
2026-02-28 01:18:40.207313 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-28 01:18:40.207520 | orchestrator | Saturday 28 February 2026 01:13:25 +0000 (0:00:00.306) 0:00:00.306 *****
2026-02-28 01:18:40.207537 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:40.207550 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:18:40.207560 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:18:40.207571 | orchestrator | 
2026-02-28 01:18:40.207583 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-28 01:18:40.207595 | orchestrator | Saturday 28 February 2026 01:13:26 +0000 (0:00:00.338) 0:00:00.644 *****
2026-02-28 01:18:40.207608 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-28 
01:18:40.207621 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-02-28 01:18:40.207633 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-02-28 01:18:40.207644 | orchestrator | 
2026-02-28 01:18:40.207655 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-02-28 01:18:40.207666 | orchestrator | 
2026-02-28 01:18:40.207676 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-28 01:18:40.207688 | orchestrator | Saturday 28 February 2026 01:13:26 +0000 (0:00:00.567) 0:00:01.212 *****
2026-02-28 01:18:40.207701 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:18:40.207715 | orchestrator | 
2026-02-28 01:18:40.207726 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-02-28 01:18:40.207737 | orchestrator | Saturday 28 February 2026 01:13:27 +0000 (0:00:00.689) 0:00:01.902 *****
2026-02-28 01:18:40.207749 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-02-28 01:18:40.207760 | orchestrator | 
2026-02-28 01:18:40.207771 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-02-28 01:18:40.207783 | orchestrator | Saturday 28 February 2026 01:13:31 +0000 (0:00:03.806) 0:00:05.708 *****
2026-02-28 01:18:40.207796 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-02-28 01:18:40.207809 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-02-28 01:18:40.207820 | orchestrator | 
2026-02-28 01:18:40.207831 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-02-28 01:18:40.207843 | orchestrator | Saturday 28 February 2026 01:13:38 +0000 (0:00:07.167) 0:00:12.875 *****
2026-02-28 01:18:40.207855 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-28 01:18:40.207865 | orchestrator | 
2026-02-28 01:18:40.207877 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-02-28 01:18:40.207889 | orchestrator | Saturday 28 February 2026 01:13:42 +0000 (0:00:03.855) 0:00:16.731 *****
2026-02-28 01:18:40.207901 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-28 01:18:40.207913 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-28 01:18:40.207924 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-28 01:18:40.207935 | orchestrator | 
2026-02-28 01:18:40.207947 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-02-28 01:18:40.207987 | orchestrator | Saturday 28 February 2026 01:13:50 +0000 (0:00:08.775) 0:00:25.507 *****
2026-02-28 01:18:40.207999 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-28 01:18:40.208010 | orchestrator | 
2026-02-28 01:18:40.208022 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-02-28 01:18:40.208033 | orchestrator | Saturday 28 February 2026 01:13:54 +0000 (0:00:03.599) 0:00:29.107 *****
2026-02-28 01:18:40.208060 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-28 01:18:40.208074 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-28 01:18:40.208085 | orchestrator | 
2026-02-28 01:18:40.208129 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-02-28 01:18:40.208144 | orchestrator | Saturday 28 February 2026 01:14:02 +0000 (0:00:07.925) 0:00:37.033 *****
2026-02-28 01:18:40.208154 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-02-28 01:18:40.208165 | orchestrator | 
changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-02-28 01:18:40.208176 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-02-28 01:18:40.208187 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-02-28 01:18:40.208198 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-02-28 01:18:40.208210 | orchestrator | 
2026-02-28 01:18:40.208221 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-28 01:18:40.208233 | orchestrator | Saturday 28 February 2026 01:14:19 +0000 (0:00:17.067) 0:00:54.101 *****
2026-02-28 01:18:40.208244 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:18:40.208256 | orchestrator | 
2026-02-28 01:18:40.208268 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-02-28 01:18:40.208280 | orchestrator | Saturday 28 February 2026 01:14:20 +0000 (0:00:01.235) 0:00:55.336 *****
2026-02-28 01:18:40.208292 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.208392 | orchestrator | 
2026-02-28 01:18:40.208405 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-02-28 01:18:40.208415 | orchestrator | Saturday 28 February 2026 01:14:26 +0000 (0:00:06.025) 0:01:01.362 *****
2026-02-28 01:18:40.208425 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.208435 | orchestrator | 
2026-02-28 01:18:40.208446 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-28 01:18:40.208481 | orchestrator | Saturday 28 February 2026 01:14:31 +0000 (0:00:04.876) 0:01:06.239 *****
2026-02-28 01:18:40.208493 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:40.208503 | orchestrator | 
2026-02-28 01:18:40.208514 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-02-28 01:18:40.208525 | orchestrator | Saturday 28 February 2026 01:14:35 +0000 (0:00:03.708) 0:01:09.947 *****
2026-02-28 01:18:40.208536 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-28 01:18:40.208547 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-28 01:18:40.208559 | orchestrator | 
2026-02-28 01:18:40.208571 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-02-28 01:18:40.208583 | orchestrator | Saturday 28 February 2026 01:14:46 +0000 (0:00:11.513) 0:01:21.461 *****
2026-02-28 01:18:40.208594 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-02-28 01:18:40.208606 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-02-28 01:18:40.208619 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-02-28 01:18:40.208630 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-02-28 01:18:40.208656 | orchestrator | 
2026-02-28 01:18:40.208667 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-02-28 01:18:40.208678 | orchestrator | Saturday 28 February 2026 01:15:05 +0000 (0:00:18.435) 0:01:39.896 *****
2026-02-28 01:18:40.208689 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.208700 | orchestrator | 
2026-02-28 01:18:40.208711 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-02-28 01:18:40.208722 | orchestrator | Saturday 28 February 2026 01:15:10 +0000 (0:00:05.948) 0:01:45.134 ***** 
2026-02-28 01:18:40.208733 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.208744 | orchestrator | 
2026-02-28 01:18:40.208755 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-02-28 01:18:40.208766 | orchestrator | Saturday 28 February 2026 01:15:16 +0000 (0:00:05.948) 0:01:51.083 *****
2026-02-28 01:18:40.208778 | orchestrator | skipping: [testbed-node-0]
2026-02-28 01:18:40.208789 | orchestrator | 
2026-02-28 01:18:40.208800 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-02-28 01:18:40.208811 | orchestrator | Saturday 28 February 2026 01:15:16 +0000 (0:00:00.285) 0:01:51.369 *****
2026-02-28 01:18:40.208822 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:40.208834 | orchestrator | 
2026-02-28 01:18:40.208845 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-28 01:18:40.208857 | orchestrator | Saturday 28 February 2026 01:15:21 +0000 (0:00:04.234) 0:01:55.603 *****
2026-02-28 01:18:40.208867 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:18:40.208877 | orchestrator | 
2026-02-28 01:18:40.208886 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-02-28 01:18:40.208896 | orchestrator | Saturday 28 February 2026 01:15:22 +0000 (0:00:01.181) 0:01:56.784 *****
2026-02-28 01:18:40.208906 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.208916 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:40.208927 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:40.208939 | orchestrator | 
2026-02-28 01:18:40.208951 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-02-28 01:18:40.208961 | orchestrator | Saturday 28 February 2026 01:15:28 +0000 (0:00:05.848) 0:02:02.633 ***** 
2026-02-28 01:18:40.208971 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:40.208990 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:40.209000 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.209011 | orchestrator | 
2026-02-28 01:18:40.209022 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-02-28 01:18:40.209034 | orchestrator | Saturday 28 February 2026 01:15:33 +0000 (0:00:05.009) 0:02:07.642 *****
2026-02-28 01:18:40.209045 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.209054 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:40.209064 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:40.209074 | orchestrator | 
2026-02-28 01:18:40.209084 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-02-28 01:18:40.209094 | orchestrator | Saturday 28 February 2026 01:15:33 +0000 (0:00:00.920) 0:02:08.562 *****
2026-02-28 01:18:40.209105 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:40.209116 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:18:40.209127 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:18:40.209139 | orchestrator | 
2026-02-28 01:18:40.209149 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-02-28 01:18:40.209159 | orchestrator | Saturday 28 February 2026 01:15:35 +0000 (0:00:01.845) 0:02:10.408 *****
2026-02-28 01:18:40.209170 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:40.209180 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:40.209190 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.209201 | orchestrator | 
2026-02-28 01:18:40.209212 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-02-28 01:18:40.209223 | orchestrator | Saturday 28 February 2026 01:15:36 +0000 (0:00:01.095) 0:02:11.503 *****
2026-02-28 01:18:40.209244 | 
orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.209254 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:40.209265 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:40.209275 | orchestrator | 
2026-02-28 01:18:40.209285 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-02-28 01:18:40.209296 | orchestrator | Saturday 28 February 2026 01:15:37 +0000 (0:00:01.018) 0:02:12.522 *****
2026-02-28 01:18:40.209307 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:40.209319 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.209355 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:40.209365 | orchestrator | 
2026-02-28 01:18:40.209400 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-02-28 01:18:40.209413 | orchestrator | Saturday 28 February 2026 01:15:39 +0000 (0:00:01.844) 0:02:14.366 *****
2026-02-28 01:18:40.209423 | orchestrator | changed: [testbed-node-0]
2026-02-28 01:18:40.209433 | orchestrator | changed: [testbed-node-1]
2026-02-28 01:18:40.209443 | orchestrator | changed: [testbed-node-2]
2026-02-28 01:18:40.209453 | orchestrator | 
2026-02-28 01:18:40.209463 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-02-28 01:18:40.209474 | orchestrator | Saturday 28 February 2026 01:15:41 +0000 (0:00:01.932) 0:02:16.298 *****
2026-02-28 01:18:40.209485 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:40.209497 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:18:40.209507 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:18:40.209518 | orchestrator | 
2026-02-28 01:18:40.209528 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-02-28 01:18:40.209538 | orchestrator | Saturday 28 February 2026 01:15:42 +0000 (0:00:00.687) 0:02:16.986 *****
2026-02-28 01:18:40.209548 | orchestrator | ok: [testbed-node-2]
2026-02-28 01:18:40.209558 | orchestrator | ok: [testbed-node-1]
2026-02-28 01:18:40.209568 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:40.209578 | orchestrator | 
2026-02-28 01:18:40.209589 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-28 01:18:40.209601 | orchestrator | Saturday 28 February 2026 01:15:46 +0000 (0:00:04.204) 0:02:21.190 *****
2026-02-28 01:18:40.209612 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-28 01:18:40.209623 | orchestrator | 
2026-02-28 01:18:40.209633 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-02-28 01:18:40.209643 | orchestrator | Saturday 28 February 2026 01:15:47 +0000 (0:00:00.867) 0:02:22.058 *****
2026-02-28 01:18:40.209653 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:40.209663 | orchestrator | 
2026-02-28 01:18:40.209673 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-28 01:18:40.209683 | orchestrator | Saturday 28 February 2026 01:15:51 +0000 (0:00:03.922) 0:02:25.980 *****
2026-02-28 01:18:40.209694 | orchestrator | ok: [testbed-node-0]
2026-02-28 01:18:40.209704 | orchestrator | 
2026-02-28 01:18:40.209714 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-02-28 01:18:40.209724 | orchestrator | Saturday 28 February 2026 01:15:55 +0000 (0:00:03.656) 0:02:29.637 *****
2026-02-28 01:18:40.209736 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-28 01:18:40.209747 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-28 01:18:40.209758 | orchestrator | 
2026-02-28 01:18:40.209769 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-02-28 01:18:40.209780 | orchestrator | Saturday 28 February 
2026 01:16:02 +0000 (0:00:07.723) 0:02:37.360 ***** 2026-02-28 01:18:40.209790 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:18:40.209800 | orchestrator | 2026-02-28 01:18:40.209810 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-28 01:18:40.209820 | orchestrator | Saturday 28 February 2026 01:16:06 +0000 (0:00:03.959) 0:02:41.319 ***** 2026-02-28 01:18:40.209831 | orchestrator | ok: [testbed-node-0] 2026-02-28 01:18:40.209841 | orchestrator | ok: [testbed-node-1] 2026-02-28 01:18:40.209862 | orchestrator | ok: [testbed-node-2] 2026-02-28 01:18:40.209872 | orchestrator | 2026-02-28 01:18:40.209883 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-28 01:18:40.209893 | orchestrator | Saturday 28 February 2026 01:16:07 +0000 (0:00:00.355) 0:02:41.674 ***** 2026-02-28 01:18:40.209916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.209951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.209963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.209975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.209988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.210009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.210168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.210183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.210210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.210222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 
'timeout': '30'}}}) 2026-02-28 01:18:40.210234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.210245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.210272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.210285 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.210296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.210307 | orchestrator | 2026-02-28 01:18:40.210318 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-28 01:18:40.210356 | orchestrator | Saturday 28 February 2026 01:16:09 +0000 (0:00:02.731) 0:02:44.405 ***** 2026-02-28 01:18:40.210367 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:40.210379 | orchestrator | 2026-02-28 01:18:40.210398 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-28 01:18:40.210410 | orchestrator | Saturday 28 February 2026 01:16:09 +0000 (0:00:00.143) 0:02:44.548 ***** 2026-02-28 01:18:40.210421 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:40.210432 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:40.210442 | orchestrator | skipping: 
[testbed-node-2] 2026-02-28 01:18:40.210459 | orchestrator | 2026-02-28 01:18:40.210472 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-28 01:18:40.210483 | orchestrator | Saturday 28 February 2026 01:16:10 +0000 (0:00:00.653) 0:02:45.202 ***** 2026-02-28 01:18:40.210494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:18:40.210531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:18:40.210543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.210561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.210574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:18:40.210586 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:40.210608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:18:40.210620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:18:40.210639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 
01:18:40.210651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.210675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:18:40.210754 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:40.210869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:18:40.210925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:18:40.210941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.210983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.210997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:18:40.211009 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:40.211021 | orchestrator | 2026-02-28 01:18:40.211034 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-28 01:18:40.211046 | orchestrator | Saturday 28 February 2026 01:16:11 +0000 (0:00:00.879) 0:02:46.081 ***** 2026-02-28 01:18:40.211057 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-28 01:18:40.211069 | orchestrator | 2026-02-28 01:18:40.211080 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-28 01:18:40.211115 | orchestrator | Saturday 28 February 2026 01:16:12 +0000 (0:00:00.689) 0:02:46.770 ***** 2026-02-28 01:18:40.211156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.211182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40 | INFO  | Task d4c3e955-ff16-4c1f-8e12-cb2421d8291d is in state SUCCESS 2026-02-28 01:18:40.211219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.211232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.211244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.211262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.211274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.211292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.211304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.211349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.211362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.211373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.211424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.211438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.211458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
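[Editor's note: hypothetical illustration, not part of the job output. The service items echoed in the tasks above each carry a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). A minimal sketch of how such a dict could be mapped onto container-runtime health flags — the flag mapping here is an assumption for illustration, not how kolla-ansible itself consumes these values:]

```python
# Hypothetical sketch: translate a kolla-style service healthcheck dict
# (as echoed in the Ansible loop items above) into docker-run style flags.
# The mapping to these flags is an assumption for illustration only.

def healthcheck_flags(svc):
    """Build docker-run style health flags from a service definition dict."""
    hc = svc.get("healthcheck")
    if not hc:
        return []  # services like octavia-driver-agent define no healthcheck
    return [
        "--health-cmd", hc["test"][1],              # the CMD-SHELL command string
        "--health-interval", f"{hc['interval']}s",  # values are stored as strings of seconds
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Example input taken from the octavia-worker item logged above.
svc = {
    "container_name": "octavia_worker",
    "image": "registry.osism.tech/kolla/octavia-worker:2024.2",
    "healthcheck": {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
        "timeout": "30",
    },
}
print(" ".join(healthcheck_flags(svc)))
```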
2026-02-28 01:18:40.211478 | orchestrator | 2026-02-28 01:18:40.211490 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-28 01:18:40.211501 | orchestrator | Saturday 28 February 2026 01:16:17 +0000 (0:00:05.705) 0:02:52.476 ***** 2026-02-28 01:18:40.211513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:18:40.211524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:18:40.211536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.211560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.211573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:18:40.211585 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:40.211604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:18:40.211626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:18:40.211638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.211649 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.211677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:18:40.211689 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:40.211701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:18:40.211727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:18:40.211740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.211753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:18:40.211764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:18:40.211776 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:40.211788 | orchestrator |
2026-02-28 01:18:40.211799 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-02-28 01:18:40.211811 | orchestrator | Saturday 28 February 2026 01:16:18 +0000 (0:00:00.767) 0:02:53.244 *****
2026-02-28 01:18:40.211829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:18:40.211842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:18:40.211869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.211882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.211894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-28 01:18:40.211905 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:40.211917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:18:40.211933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:18:40.211945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.211964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-28 01:18:40.211984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 
5672'], 'timeout': '30'}}})  2026-02-28 01:18:40.211996 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:40.212008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-28 01:18:40.212019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-28 01:18:40.212031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-28 01:18:40.212048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-28 01:18:40.212067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:18:40.212079 | orchestrator | skipping: [testbed-node-2]
2026-02-28 01:18:40.212090 | orchestrator |
2026-02-28 01:18:40.212102 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-02-28 01:18:40.212113 | orchestrator | Saturday 28 February 2026 01:16:19 +0000 (0:00:00.999) 0:02:54.243 *****
2026-02-28 01:18:40.212133 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.212146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.212164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.212184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.212196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.212213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.212226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-28 01:18:40.212377 | orchestrator |
2026-02-28 01:18:40.212389 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-02-28 01:18:40.212400 | orchestrator | Saturday 28 February 2026 01:16:26 +0000 (0:00:06.466) 0:03:00.710 *****
2026-02-28 01:18:40.212412 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-28 01:18:40.212424 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-28 01:18:40.212436 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-28 01:18:40.212448 | orchestrator |
2026-02-28 01:18:40.212459 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-02-28 01:18:40.212470 | orchestrator | Saturday 28 February 2026 01:16:28 +0000 (0:00:02.262) 0:03:02.972 *****
2026-02-28 01:18:40.212498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.212511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.212534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.212546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.212558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.212570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.212595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.212725 | orchestrator | 2026-02-28 01:18:40.212737 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-28 01:18:40.212749 | orchestrator | Saturday 28 February 2026 01:16:47 +0000 (0:00:18.803) 0:03:21.776 ***** 2026-02-28 01:18:40.212760 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:40.212772 | orchestrator | changed: [testbed-node-2] 
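For readers inspecting the container definitions above: each service carries a kolla-style `healthcheck` dictionary (`interval`, `retries`, `start_period`, `test`, `timeout`). A minimal sketch of how such a dict could be flattened into `docker run` health flags — the dict shape is taken verbatim from the log, but the helper function and its unit handling are illustrative assumptions, not kolla-ansible's actual implementation:

```python
# Sketch: flatten a kolla-style healthcheck dict (as logged above) into
# `docker run` CLI arguments. Function name and seconds-suffix handling
# are assumptions for illustration, not kolla-ansible internals.

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Convert {'interval': '30', 'test': ['CMD-SHELL', ...], ...} to CLI flags."""
    args: list[str] = []
    if hc.get("test"):
        # The log shows the ['CMD-SHELL', '<command>'] exec form; the CLI
        # takes just the shell command string.
        args += ["--health-cmd", hc["test"][-1]]
    for key, flag in [("interval", "--health-interval"),
                      ("timeout", "--health-timeout"),
                      ("start_period", "--health-start-period")]:
        if key in hc:
            args += [flag, f"{hc[key]}s"]  # values in the log look like seconds
    if "retries" in hc:
        args += ["--health-retries", str(hc["retries"])]
    return args


example = {"interval": "30", "retries": "3", "start_period": "5",
           "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
           "timeout": "30"}
print(healthcheck_to_docker_args(example))
```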
2026-02-28 01:18:40.212784 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:18:40.212795 | orchestrator | 2026-02-28 01:18:40.212806 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-28 01:18:40.212817 | orchestrator | Saturday 28 February 2026 01:16:49 +0000 (0:00:02.380) 0:03:24.156 ***** 2026-02-28 01:18:40.212835 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-28 01:18:40.212847 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-28 01:18:40.212859 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-28 01:18:40.212870 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-28 01:18:40.212882 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-28 01:18:40.212894 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-28 01:18:40.212905 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-28 01:18:40.212917 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-28 01:18:40.212928 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-28 01:18:40.212940 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-28 01:18:40.212951 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-28 01:18:40.212963 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-28 01:18:40.212975 | orchestrator | 2026-02-28 01:18:40.212986 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-28 01:18:40.212997 | orchestrator | Saturday 28 February 2026 01:16:55 +0000 (0:00:05.572) 0:03:29.729 ***** 2026-02-28 01:18:40.213008 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-28 01:18:40.213028 | orchestrator | changed: 
[testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-28 01:18:40.213039 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-28 01:18:40.213051 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-28 01:18:40.213062 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-28 01:18:40.213073 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-28 01:18:40.213084 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-28 01:18:40.213096 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-28 01:18:40.213107 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-28 01:18:40.213118 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-28 01:18:40.213130 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-28 01:18:40.213142 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-28 01:18:40.213153 | orchestrator | 2026-02-28 01:18:40.213165 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-28 01:18:40.213176 | orchestrator | Saturday 28 February 2026 01:17:01 +0000 (0:00:06.022) 0:03:35.751 ***** 2026-02-28 01:18:40.213187 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-28 01:18:40.213199 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-28 01:18:40.213210 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-28 01:18:40.213221 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-28 01:18:40.213233 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-28 01:18:40.213244 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-28 01:18:40.213256 | orchestrator | changed: 
[testbed-node-0] => (item=server_ca.cert.pem) 2026-02-28 01:18:40.213267 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-28 01:18:40.213278 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-28 01:18:40.213289 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-28 01:18:40.213300 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-28 01:18:40.213317 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-28 01:18:40.213360 | orchestrator | 2026-02-28 01:18:40.213372 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-28 01:18:40.213384 | orchestrator | Saturday 28 February 2026 01:17:06 +0000 (0:00:05.425) 0:03:41.177 ***** 2026-02-28 01:18:40.213397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.213419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.213442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-28 01:18:40.213455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.213467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.213483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-28 01:18:40.213495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.213514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.213536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.213548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2026-02-28 01:18:40.213645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.213662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-28 01:18:40.213680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.213692 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.213713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-28 01:18:40.213725 | orchestrator | 2026-02-28 01:18:40.213736 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-28 01:18:40.213747 | orchestrator | Saturday 28 February 2026 01:17:11 +0000 (0:00:04.845) 0:03:46.022 ***** 2026-02-28 01:18:40.213758 | orchestrator | skipping: [testbed-node-0] 2026-02-28 01:18:40.213769 | orchestrator | skipping: [testbed-node-1] 2026-02-28 01:18:40.213780 | orchestrator | skipping: [testbed-node-2] 2026-02-28 01:18:40.213791 | orchestrator | 2026-02-28 01:18:40.213802 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-28 01:18:40.213813 | orchestrator | Saturday 28 February 2026 01:17:12 +0000 (0:00:00.715) 0:03:46.738 ***** 2026-02-28 01:18:40.213825 | orchestrator | changed: [testbed-node-0] 2026-02-28 
01:18:40.213836 | orchestrator | 2026-02-28 01:18:40.213847 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-28 01:18:40.213858 | orchestrator | Saturday 28 February 2026 01:17:15 +0000 (0:00:02.861) 0:03:49.600 ***** 2026-02-28 01:18:40.213869 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:40.213880 | orchestrator | 2026-02-28 01:18:40.213891 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-28 01:18:40.213902 | orchestrator | Saturday 28 February 2026 01:17:17 +0000 (0:00:02.782) 0:03:52.383 ***** 2026-02-28 01:18:40.213913 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:40.213924 | orchestrator | 2026-02-28 01:18:40.213935 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-28 01:18:40.213946 | orchestrator | Saturday 28 February 2026 01:17:20 +0000 (0:00:02.589) 0:03:54.973 ***** 2026-02-28 01:18:40.213958 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:40.213969 | orchestrator | 2026-02-28 01:18:40.213981 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-02-28 01:18:40.213991 | orchestrator | Saturday 28 February 2026 01:17:24 +0000 (0:00:03.909) 0:03:58.882 ***** 2026-02-28 01:18:40.214003 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:40.214097 | orchestrator | 2026-02-28 01:18:40.214117 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-28 01:18:40.214128 | orchestrator | Saturday 28 February 2026 01:17:47 +0000 (0:00:23.380) 0:04:22.263 ***** 2026-02-28 01:18:40.214140 | orchestrator | 2026-02-28 01:18:40.214162 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-28 01:18:40.214174 | orchestrator | Saturday 28 February 2026 01:17:47 +0000 (0:00:00.082) 0:04:22.345 ***** 
2026-02-28 01:18:40.214185 | orchestrator | 2026-02-28 01:18:40.214197 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-28 01:18:40.214208 | orchestrator | Saturday 28 February 2026 01:17:47 +0000 (0:00:00.074) 0:04:22.420 ***** 2026-02-28 01:18:40.214219 | orchestrator | 2026-02-28 01:18:40.214230 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-28 01:18:40.214241 | orchestrator | Saturday 28 February 2026 01:17:47 +0000 (0:00:00.079) 0:04:22.499 ***** 2026-02-28 01:18:40.214254 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:40.214265 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:18:40.214276 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:18:40.214287 | orchestrator | 2026-02-28 01:18:40.214299 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-28 01:18:40.214310 | orchestrator | Saturday 28 February 2026 01:17:59 +0000 (0:00:11.814) 0:04:34.314 ***** 2026-02-28 01:18:40.214424 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:18:40.214438 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:18:40.214450 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:40.214461 | orchestrator | 2026-02-28 01:18:40.214480 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-28 01:18:40.214490 | orchestrator | Saturday 28 February 2026 01:18:08 +0000 (0:00:08.720) 0:04:43.035 ***** 2026-02-28 01:18:40.214500 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:18:40.214511 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:18:40.214521 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:40.214532 | orchestrator | 2026-02-28 01:18:40.214542 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-28 01:18:40.214552 | orchestrator | Saturday 28 
February 2026 01:18:17 +0000 (0:00:09.195) 0:04:52.230 ***** 2026-02-28 01:18:40.214562 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:18:40.214573 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:40.214582 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:18:40.214593 | orchestrator | 2026-02-28 01:18:40.214603 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-28 01:18:40.214613 | orchestrator | Saturday 28 February 2026 01:18:28 +0000 (0:00:10.587) 0:05:02.817 ***** 2026-02-28 01:18:40.214622 | orchestrator | changed: [testbed-node-0] 2026-02-28 01:18:40.214632 | orchestrator | changed: [testbed-node-1] 2026-02-28 01:18:40.214643 | orchestrator | changed: [testbed-node-2] 2026-02-28 01:18:40.214652 | orchestrator | 2026-02-28 01:18:40.214663 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-28 01:18:40.214674 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-28 01:18:40.214686 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 01:18:40.214697 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-28 01:18:40.214707 | orchestrator | 2026-02-28 01:18:40.214717 | orchestrator | 2026-02-28 01:18:40.214727 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-28 01:18:40.214737 | orchestrator | Saturday 28 February 2026 01:18:39 +0000 (0:00:11.153) 0:05:13.971 ***** 2026-02-28 01:18:40.214747 | orchestrator | =============================================================================== 2026-02-28 01:18:40.214758 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.38s 2026-02-28 01:18:40.214768 | orchestrator | octavia : Copying over octavia.conf 
------------------------------------ 18.80s 2026-02-28 01:18:40.214778 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.44s 2026-02-28 01:18:40.214787 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.07s 2026-02-28 01:18:40.214797 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.82s 2026-02-28 01:18:40.214807 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.51s 2026-02-28 01:18:40.214818 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.15s 2026-02-28 01:18:40.214827 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.59s 2026-02-28 01:18:40.214837 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 9.20s 2026-02-28 01:18:40.214847 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.78s 2026-02-28 01:18:40.214857 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.72s 2026-02-28 01:18:40.214867 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.93s 2026-02-28 01:18:40.214877 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.72s 2026-02-28 01:18:40.214895 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.17s 2026-02-28 01:18:40.214905 | orchestrator | octavia : Copying over config.json files for services ------------------- 6.47s 2026-02-28 01:18:40.214914 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.03s 2026-02-28 01:18:40.214924 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.02s 2026-02-28 01:18:40.214934 | orchestrator | octavia : Create loadbalancer management subnet 
------------------------- 5.95s 2026-02-28 01:18:40.214944 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.85s 2026-02-28 01:18:40.214954 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.71s 2026-02-28 01:18:40.214971 | orchestrator | 2026-02-28 01:18:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:43.244150 | orchestrator | 2026-02-28 01:18:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:46.286971 | orchestrator | 2026-02-28 01:18:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:49.336623 | orchestrator | 2026-02-28 01:18:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:52.383880 | orchestrator | 2026-02-28 01:18:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:55.421923 | orchestrator | 2026-02-28 01:18:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:18:58.460041 | orchestrator | 2026-02-28 01:18:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:19:01.499507 | orchestrator | 2026-02-28 01:19:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:19:04.540331 | orchestrator | 2026-02-28 01:19:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:19:07.580626 | orchestrator | 2026-02-28 01:19:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:19:10.615549 | orchestrator | 2026-02-28 01:19:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:19:13.666390 | orchestrator | 2026-02-28 01:19:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:19:16.708528 | orchestrator | 2026-02-28 01:19:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 01:19:19.751760 | orchestrator | 2026-02-28 01:19:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-28 
01:19:22.797994 | orchestrator | 2026-02-28 01:19:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-28 01:19:25.846985 | orchestrator | 2026-02-28 01:19:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-28 01:19:28.882651 | orchestrator | 2026-02-28 01:19:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-28 01:19:31.930699 | orchestrator | 2026-02-28 01:19:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-28 01:19:34.978925 | orchestrator | 2026-02-28 01:19:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-28 01:19:38.023115 | orchestrator | 2026-02-28 01:19:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-28 01:19:41.065410 | orchestrator |
2026-02-28 01:19:41.500693 | orchestrator |
2026-02-28 01:19:41.509379 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Feb 28 01:19:41 UTC 2026
2026-02-28 01:19:41.509652 | orchestrator |
2026-02-28 01:19:41.832275 | orchestrator | ok: Runtime: 0:38:54.771805
2026-02-28 01:19:42.088939 |
2026-02-28 01:19:42.089144 | TASK [Bootstrap services]
2026-02-28 01:19:42.891856 | orchestrator |
2026-02-28 01:19:42.892056 | orchestrator | # BOOTSTRAP
2026-02-28 01:19:42.892110 | orchestrator |
2026-02-28 01:19:42.892138 | orchestrator | + set -e
2026-02-28 01:19:42.892160 | orchestrator | + echo
2026-02-28 01:19:42.892179 | orchestrator | + echo '# BOOTSTRAP'
2026-02-28 01:19:42.892200 | orchestrator | + echo
2026-02-28 01:19:42.892245 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-02-28 01:19:42.905361 | orchestrator | + set -e
2026-02-28 01:19:42.905456 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-02-28 01:19:48.758996 | orchestrator | 2026-02-28 01:19:48 | INFO  | It takes a moment until task a1d7f907-096a-4b51-8573-41cead757873 (flavor-manager) has been started and output is visible here.
2026-02-28 01:19:57.414781 | orchestrator | 2026-02-28 01:19:52 | INFO  | Flavor SCS-1L-1 created
2026-02-28 01:19:57.414871 | orchestrator | 2026-02-28 01:19:52 | INFO  | Flavor SCS-1L-1-5 created
2026-02-28 01:19:57.414889 | orchestrator | 2026-02-28 01:19:53 | INFO  | Flavor SCS-1V-2 created
2026-02-28 01:19:57.414894 | orchestrator | 2026-02-28 01:19:53 | INFO  | Flavor SCS-1V-2-5 created
2026-02-28 01:19:57.414899 | orchestrator | 2026-02-28 01:19:53 | INFO  | Flavor SCS-1V-4 created
2026-02-28 01:19:57.414904 | orchestrator | 2026-02-28 01:19:53 | INFO  | Flavor SCS-1V-4-10 created
2026-02-28 01:19:57.414908 | orchestrator | 2026-02-28 01:19:53 | INFO  | Flavor SCS-1V-8 created
2026-02-28 01:19:57.414913 | orchestrator | 2026-02-28 01:19:54 | INFO  | Flavor SCS-1V-8-20 created
2026-02-28 01:19:57.414929 | orchestrator | 2026-02-28 01:19:54 | INFO  | Flavor SCS-2V-4 created
2026-02-28 01:19:57.414934 | orchestrator | 2026-02-28 01:19:54 | INFO  | Flavor SCS-2V-4-10 created
2026-02-28 01:19:57.414939 | orchestrator | 2026-02-28 01:19:54 | INFO  | Flavor SCS-2V-8 created
2026-02-28 01:19:57.414943 | orchestrator | 2026-02-28 01:19:54 | INFO  | Flavor SCS-2V-8-20 created
2026-02-28 01:19:57.414948 | orchestrator | 2026-02-28 01:19:54 | INFO  | Flavor SCS-2V-16 created
2026-02-28 01:19:57.414952 | orchestrator | 2026-02-28 01:19:54 | INFO  | Flavor SCS-2V-16-50 created
2026-02-28 01:19:57.414957 | orchestrator | 2026-02-28 01:19:55 | INFO  | Flavor SCS-4V-8 created
2026-02-28 01:19:57.414961 | orchestrator | 2026-02-28 01:19:55 | INFO  | Flavor SCS-4V-8-20 created
2026-02-28 01:19:57.414965 | orchestrator | 2026-02-28 01:19:55 | INFO  | Flavor SCS-4V-16 created
2026-02-28 01:19:57.414970 | orchestrator | 2026-02-28 01:19:55 | INFO  | Flavor SCS-4V-16-50 created
2026-02-28 01:19:57.414975 | orchestrator | 2026-02-28 01:19:55 | INFO  | Flavor SCS-4V-32 created
2026-02-28 01:19:57.414982 | orchestrator | 2026-02-28 01:19:55 | INFO  | Flavor SCS-4V-32-100 created
2026-02-28 01:19:57.414990 | orchestrator | 2026-02-28 01:19:55 | INFO  | Flavor SCS-8V-16 created
2026-02-28 01:19:57.414996 | orchestrator | 2026-02-28 01:19:56 | INFO  | Flavor SCS-8V-16-50 created
2026-02-28 01:19:57.415004 | orchestrator | 2026-02-28 01:19:56 | INFO  | Flavor SCS-8V-32 created
2026-02-28 01:19:57.415011 | orchestrator | 2026-02-28 01:19:56 | INFO  | Flavor SCS-8V-32-100 created
2026-02-28 01:19:57.415018 | orchestrator | 2026-02-28 01:19:56 | INFO  | Flavor SCS-16V-32 created
2026-02-28 01:19:57.415026 | orchestrator | 2026-02-28 01:19:56 | INFO  | Flavor SCS-16V-32-100 created
2026-02-28 01:19:57.415034 | orchestrator | 2026-02-28 01:19:56 | INFO  | Flavor SCS-2V-4-20s created
2026-02-28 01:19:57.415041 | orchestrator | 2026-02-28 01:19:56 | INFO  | Flavor SCS-4V-8-50s created
2026-02-28 01:19:57.415048 | orchestrator | 2026-02-28 01:19:57 | INFO  | Flavor SCS-4V-16-100s created
2026-02-28 01:19:57.415055 | orchestrator | 2026-02-28 01:19:57 | INFO  | Flavor SCS-8V-32-100s created
2026-02-28 01:20:00.052400 | orchestrator | 2026-02-28 01:20:00 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-02-28 01:20:00.065445 | orchestrator | 2026-02-28 01:20:00 | INFO  | Prepare task for execution of bootstrap-basic.
2026-02-28 01:20:00.143209 | orchestrator | 2026-02-28 01:20:00 | INFO  | Task c25609dc-dcf5-405a-8d47-abcf170506cf (bootstrap-basic) was prepared for execution.
2026-02-28 01:20:00.143324 | orchestrator | 2026-02-28 01:20:00 | INFO  | It takes a moment until task c25609dc-dcf5-405a-8d47-abcf170506cf (bootstrap-basic) was started and output is visible here.
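[editor's note] The flavor-manager output above creates flavors named after the SCS flavor naming convention (e.g. SCS-2V-4-10: 2 vCPUs, 4 GiB RAM, 10 GB root disk). A minimal Python sketch of a parser for these names, assuming the SCS scheme (a CPU-class letter such as V or L after the vCPU count, and an optional storage-type letter such as `s` for local SSD after the disk size); the helper name is illustrative, not part of the tooling shown in this log:

```python
import re

# Hypothetical decoder for SCS flavor names like those created above:
#   SCS-<vCPUs><cpu-class>-<RAM GiB>[-<disk GB>[<disk-class>]]
# Assumption: cpu-class "V" = overcommitted vCPU, "L" = low-performance;
# a trailing "s" on the disk part marks local SSD storage.
def parse_scs_flavor(name):
    m = re.fullmatch(r"SCS-(\d+)([LVTC])-(\d+)(?:-(\d+)([shnp]?))?", name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    cpus, cpu_class, ram, disk, disk_class = m.groups()
    return {
        "vcpus": int(cpus),
        "cpu_class": cpu_class,
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else 0,   # no disk part -> diskless flavor
        "disk_class": disk_class or None,
    }

# Example: parse_scs_flavor("SCS-2V-4-10") -> 2 vCPUs, 4 GiB RAM, 10 GB disk
```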
2026-02-28 01:20:52.273674 | orchestrator |
2026-02-28 01:20:52.273783 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-02-28 01:20:52.273800 | orchestrator |
2026-02-28 01:20:52.273813 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-28 01:20:52.273824 | orchestrator | Saturday 28 February 2026 01:20:04 +0000 (0:00:00.083) 0:00:00.083 *****
2026-02-28 01:20:52.273835 | orchestrator | ok: [localhost]
2026-02-28 01:20:52.273847 | orchestrator |
2026-02-28 01:20:52.273858 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-02-28 01:20:52.273870 | orchestrator | Saturday 28 February 2026 01:20:07 +0000 (0:00:02.193) 0:00:02.277 *****
2026-02-28 01:20:52.273905 | orchestrator | ok: [localhost]
2026-02-28 01:20:52.273929 | orchestrator |
2026-02-28 01:20:52.273941 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-02-28 01:20:52.273952 | orchestrator | Saturday 28 February 2026 01:20:17 +0000 (0:00:10.320) 0:00:12.598 *****
2026-02-28 01:20:52.273963 | orchestrator | changed: [localhost]
2026-02-28 01:20:52.273975 | orchestrator |
2026-02-28 01:20:52.273986 | orchestrator | TASK [Create public network] ***************************************************
2026-02-28 01:20:52.273998 | orchestrator | Saturday 28 February 2026 01:20:25 +0000 (0:00:08.226) 0:00:20.824 *****
2026-02-28 01:20:52.274009 | orchestrator | changed: [localhost]
2026-02-28 01:20:52.274086 | orchestrator |
2026-02-28 01:20:52.274105 | orchestrator | TASK [Set public network to default] *******************************************
2026-02-28 01:20:52.274117 | orchestrator | Saturday 28 February 2026 01:20:31 +0000 (0:00:05.661) 0:00:26.486 *****
2026-02-28 01:20:52.274128 | orchestrator | changed: [localhost]
2026-02-28 01:20:52.274139 | orchestrator |
2026-02-28 01:20:52.274151 | orchestrator | TASK [Create public subnet] ****************************************************
2026-02-28 01:20:52.274162 | orchestrator | Saturday 28 February 2026 01:20:38 +0000 (0:00:07.284) 0:00:33.771 *****
2026-02-28 01:20:52.274173 | orchestrator | changed: [localhost]
2026-02-28 01:20:52.274184 | orchestrator |
2026-02-28 01:20:52.274197 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-02-28 01:20:52.274211 | orchestrator | Saturday 28 February 2026 01:20:43 +0000 (0:00:04.764) 0:00:38.535 *****
2026-02-28 01:20:52.274271 | orchestrator | changed: [localhost]
2026-02-28 01:20:52.274297 | orchestrator |
2026-02-28 01:20:52.274317 | orchestrator | TASK [Create manager role] *****************************************************
2026-02-28 01:20:52.274349 | orchestrator | Saturday 28 February 2026 01:20:47 +0000 (0:00:04.411) 0:00:42.946 *****
2026-02-28 01:20:52.274369 | orchestrator | ok: [localhost]
2026-02-28 01:20:52.274388 | orchestrator |
2026-02-28 01:20:52.274409 | orchestrator | PLAY RECAP *********************************************************************
2026-02-28 01:20:52.274429 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-28 01:20:52.274450 | orchestrator |
2026-02-28 01:20:52.274470 | orchestrator |
2026-02-28 01:20:52.274489 | orchestrator | TASKS RECAP ********************************************************************
2026-02-28 01:20:52.274504 | orchestrator | Saturday 28 February 2026 01:20:51 +0000 (0:00:04.209) 0:00:47.156 *****
2026-02-28 01:20:52.274517 | orchestrator | ===============================================================================
2026-02-28 01:20:52.274530 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.32s
2026-02-28 01:20:52.274569 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.23s
2026-02-28 01:20:52.274581 | orchestrator | Set public network to default ------------------------------------------- 7.28s
2026-02-28 01:20:52.274592 | orchestrator | Create public network --------------------------------------------------- 5.66s
2026-02-28 01:20:52.274603 | orchestrator | Create public subnet ---------------------------------------------------- 4.76s
2026-02-28 01:20:52.274614 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.41s
2026-02-28 01:20:52.274624 | orchestrator | Create manager role ----------------------------------------------------- 4.21s
2026-02-28 01:20:52.274635 | orchestrator | Gathering Facts --------------------------------------------------------- 2.19s
2026-02-28 01:20:55.184029 | orchestrator | 2026-02-28 01:20:55 | INFO  | It takes a moment until task 6010e1c8-cb85-443f-b176-522fa62fa1b2 (image-manager) has been started and output is visible here.
2026-02-28 01:21:39.170564 | orchestrator | 2026-02-28 01:20:58 | INFO  | Processing image 'Cirros 0.6.2'
2026-02-28 01:21:39.170713 | orchestrator | 2026-02-28 01:20:58 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-02-28 01:21:39.170743 | orchestrator | 2026-02-28 01:20:58 | INFO  | Importing image Cirros 0.6.2
2026-02-28 01:21:39.170764 | orchestrator | 2026-02-28 01:20:58 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-28 01:21:39.170785 | orchestrator | 2026-02-28 01:21:01 | INFO  | Waiting for image to leave queued state...
2026-02-28 01:21:39.170806 | orchestrator | 2026-02-28 01:21:03 | INFO  | Waiting for import to complete...
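[editor's note] The Ansible PLAY RECAP host line above (`localhost : ok=8  changed=5  ...`) is easy to parse when post-processing job logs like this one, e.g. to fail a pipeline on `failed > 0` or `unreachable > 0`. A small sketch; the helper name is an invention for illustration:

```python
import re

def parse_play_recap(line):
    # Parse an Ansible PLAY RECAP host line of the form:
    #   "<host> : ok=8  changed=5  unreachable=0 failed=0 ..."
    # Returns the host name and a dict of the counters.
    host, _, rest = line.partition(":")
    counts = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counts

host, counts = parse_play_recap(
    "localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
)
# A log post-processor could then flag failures:
deploy_ok = counts["failed"] == 0 and counts["unreachable"] == 0
```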
2026-02-28 01:21:39.170826 | orchestrator | 2026-02-28 01:21:13 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-02-28 01:21:39.170847 | orchestrator | 2026-02-28 01:21:13 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-02-28 01:21:39.170868 | orchestrator | 2026-02-28 01:21:13 | INFO  | Setting internal_version = 0.6.2
2026-02-28 01:21:39.170887 | orchestrator | 2026-02-28 01:21:13 | INFO  | Setting image_original_user = cirros
2026-02-28 01:21:39.170908 | orchestrator | 2026-02-28 01:21:13 | INFO  | Adding tag os:cirros
2026-02-28 01:21:39.170928 | orchestrator | 2026-02-28 01:21:14 | INFO  | Setting property architecture: x86_64
2026-02-28 01:21:39.170949 | orchestrator | 2026-02-28 01:21:14 | INFO  | Setting property hw_disk_bus: scsi
2026-02-28 01:21:39.170969 | orchestrator | 2026-02-28 01:21:14 | INFO  | Setting property hw_rng_model: virtio
2026-02-28 01:21:39.170987 | orchestrator | 2026-02-28 01:21:15 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-28 01:21:39.171007 | orchestrator | 2026-02-28 01:21:15 | INFO  | Setting property hw_watchdog_action: reset
2026-02-28 01:21:39.171025 | orchestrator | 2026-02-28 01:21:15 | INFO  | Setting property hypervisor_type: qemu
2026-02-28 01:21:39.171058 | orchestrator | 2026-02-28 01:21:15 | INFO  | Setting property os_distro: cirros
2026-02-28 01:21:39.171079 | orchestrator | 2026-02-28 01:21:16 | INFO  | Setting property os_purpose: minimal
2026-02-28 01:21:39.171101 | orchestrator | 2026-02-28 01:21:16 | INFO  | Setting property replace_frequency: never
2026-02-28 01:21:39.171123 | orchestrator | 2026-02-28 01:21:16 | INFO  | Setting property uuid_validity: none
2026-02-28 01:21:39.171144 | orchestrator | 2026-02-28 01:21:16 | INFO  | Setting property provided_until: none
2026-02-28 01:21:39.171164 | orchestrator | 2026-02-28 01:21:17 | INFO  | Setting property image_description: Cirros
2026-02-28 01:21:39.171273 | orchestrator | 2026-02-28 01:21:17 | INFO  | Setting property image_name: Cirros
2026-02-28 01:21:39.171297 | orchestrator | 2026-02-28 01:21:17 | INFO  | Setting property internal_version: 0.6.2
2026-02-28 01:21:39.171316 | orchestrator | 2026-02-28 01:21:18 | INFO  | Setting property image_original_user: cirros
2026-02-28 01:21:39.171335 | orchestrator | 2026-02-28 01:21:18 | INFO  | Setting property os_version: 0.6.2
2026-02-28 01:21:39.171356 | orchestrator | 2026-02-28 01:21:18 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-28 01:21:39.171374 | orchestrator | 2026-02-28 01:21:18 | INFO  | Setting property image_build_date: 2023-05-30
2026-02-28 01:21:39.171392 | orchestrator | 2026-02-28 01:21:19 | INFO  | Checking status of 'Cirros 0.6.2'
2026-02-28 01:21:39.171416 | orchestrator | 2026-02-28 01:21:19 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-02-28 01:21:39.171434 | orchestrator | 2026-02-28 01:21:19 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-02-28 01:21:39.171452 | orchestrator | 2026-02-28 01:21:19 | INFO  | Processing image 'Cirros 0.6.3'
2026-02-28 01:21:39.171469 | orchestrator | 2026-02-28 01:21:19 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-02-28 01:21:39.171487 | orchestrator | 2026-02-28 01:21:19 | INFO  | Importing image Cirros 0.6.3
2026-02-28 01:21:39.171505 | orchestrator | 2026-02-28 01:21:19 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-28 01:21:39.171523 | orchestrator | 2026-02-28 01:21:20 | INFO  | Waiting for image to leave queued state...
2026-02-28 01:21:39.171570 | orchestrator | 2026-02-28 01:21:22 | INFO  | Waiting for import to complete...
2026-02-28 01:21:39.171591 | orchestrator | 2026-02-28 01:21:32 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-28 01:21:39.171609 | orchestrator | 2026-02-28 01:21:32 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-28 01:21:39.171627 | orchestrator | 2026-02-28 01:21:32 | INFO  | Setting internal_version = 0.6.3
2026-02-28 01:21:39.171645 | orchestrator | 2026-02-28 01:21:32 | INFO  | Setting image_original_user = cirros
2026-02-28 01:21:39.171664 | orchestrator | 2026-02-28 01:21:32 | INFO  | Adding tag os:cirros
2026-02-28 01:21:39.171682 | orchestrator | 2026-02-28 01:21:33 | INFO  | Setting property architecture: x86_64
2026-02-28 01:21:39.171700 | orchestrator | 2026-02-28 01:21:33 | INFO  | Setting property hw_disk_bus: scsi
2026-02-28 01:21:39.171718 | orchestrator | 2026-02-28 01:21:33 | INFO  | Setting property hw_rng_model: virtio
2026-02-28 01:21:39.171737 | orchestrator | 2026-02-28 01:21:33 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-28 01:21:39.171756 | orchestrator | 2026-02-28 01:21:34 | INFO  | Setting property hw_watchdog_action: reset
2026-02-28 01:21:39.171774 | orchestrator | 2026-02-28 01:21:34 | INFO  | Setting property hypervisor_type: qemu
2026-02-28 01:21:39.171792 | orchestrator | 2026-02-28 01:21:34 | INFO  | Setting property os_distro: cirros
2026-02-28 01:21:39.171809 | orchestrator | 2026-02-28 01:21:34 | INFO  | Setting property os_purpose: minimal
2026-02-28 01:21:39.171827 | orchestrator | 2026-02-28 01:21:34 | INFO  | Setting property replace_frequency: never
2026-02-28 01:21:39.171844 | orchestrator | 2026-02-28 01:21:35 | INFO  | Setting property uuid_validity: none
2026-02-28 01:21:39.171862 | orchestrator | 2026-02-28 01:21:35 | INFO  | Setting property provided_until: none
2026-02-28 01:21:39.171898 | orchestrator | 2026-02-28 01:21:35 | INFO  | Setting property image_description: Cirros
2026-02-28 01:21:39.171917 | orchestrator | 2026-02-28 01:21:35 | INFO  | Setting property image_name: Cirros
2026-02-28 01:21:39.171935 | orchestrator | 2026-02-28 01:21:36 | INFO  | Setting property internal_version: 0.6.3
2026-02-28 01:21:39.171955 | orchestrator | 2026-02-28 01:21:36 | INFO  | Setting property image_original_user: cirros
2026-02-28 01:21:39.171974 | orchestrator | 2026-02-28 01:21:37 | INFO  | Setting property os_version: 0.6.3
2026-02-28 01:21:39.171994 | orchestrator | 2026-02-28 01:21:37 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-28 01:21:39.172011 | orchestrator | 2026-02-28 01:21:37 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-28 01:21:39.172030 | orchestrator | 2026-02-28 01:21:38 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-28 01:21:39.172048 | orchestrator | 2026-02-28 01:21:38 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-28 01:21:39.172048 | orchestrator | 2026-02-28 01:21:38 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-28 01:21:39.595520 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-28 01:21:42.308707 | orchestrator | 2026-02-28 01:21:42 | INFO  | date: 2026-02-27
2026-02-28 01:21:42.308808 | orchestrator | 2026-02-28 01:21:42 | INFO  | image: octavia-amphora-haproxy-2024.2.20260227.qcow2
2026-02-28 01:21:42.308845 | orchestrator | 2026-02-28 01:21:42 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260227.qcow2
2026-02-28 01:21:42.308860 | orchestrator | 2026-02-28 01:21:42 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260227.qcow2.CHECKSUM
2026-02-28 01:21:42.409543 | orchestrator | 2026-02-28 01:21:42 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/e667d719f59143fa93177324baaeaa58/work/logs"
2026-02-28 01:22:15.968462 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e667d719f59143fa93177324baaeaa58/work/artifacts"
2026-02-28 01:22:16.258787 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e667d719f59143fa93177324baaeaa58/work/docs"
2026-02-28 01:22:16.271146 |
2026-02-28 01:22:16.271286 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-28 01:22:17.190355 | orchestrator | changed: .d..t...... ./
2026-02-28 01:22:17.190701 | orchestrator | changed: All items complete
2026-02-28 01:22:17.190761 |
2026-02-28 01:22:17.943030 | orchestrator | changed: .d..t...... ./
2026-02-28 01:22:18.680150 | orchestrator | changed: .d..t...... ./
2026-02-28 01:22:18.708826 |
2026-02-28 01:22:18.708981 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-28 01:22:18.740093 | orchestrator | skipping: Conditional result was False
2026-02-28 01:22:18.743545 | orchestrator | skipping: Conditional result was False
2026-02-28 01:22:18.754482 |
2026-02-28 01:22:18.754586 | PLAY RECAP
2026-02-28 01:22:18.754647 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-28 01:22:18.754676 |
2026-02-28 01:22:18.889332 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-28 01:22:18.893659 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-28 01:22:19.675867 |
2026-02-28 01:22:19.676056 | PLAY [Base post]
2026-02-28 01:22:19.692871 |
2026-02-28 01:22:19.693043 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-28 01:22:20.959853 | orchestrator | changed
2026-02-28 01:22:20.970580 |
2026-02-28 01:22:20.970706 | PLAY RECAP
2026-02-28 01:22:20.970787 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-28 01:22:20.970888 |
2026-02-28 01:22:21.092251 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-28 01:22:21.094804 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-28 01:22:21.930149 |
2026-02-28 01:22:21.930324 | PLAY [Base post-logs]
2026-02-28 01:22:21.941329 |
2026-02-28 01:22:21.941479 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-28 01:22:22.387884 | localhost | changed
2026-02-28 01:22:22.404117 |
2026-02-28 01:22:22.404315 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-28 01:22:22.441354 | localhost | ok
2026-02-28 01:22:22.445769 |
2026-02-28 01:22:22.445904 | TASK [Set zuul-log-path fact]
2026-02-28 01:22:22.473347 | localhost | ok
2026-02-28 01:22:22.489623 |
2026-02-28 01:22:22.489800 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-28 01:22:22.528315 | localhost | ok
2026-02-28 01:22:22.534976 |
2026-02-28 01:22:22.535169 | TASK [upload-logs : Create log directories]
2026-02-28 01:22:23.020527 | localhost | changed
2026-02-28 01:22:23.023728 |
2026-02-28 01:22:23.023848 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-28 01:22:23.524800 | localhost -> localhost | ok: Runtime: 0:00:00.007379
2026-02-28 01:22:23.535801 |
2026-02-28 01:22:23.536893 | TASK [upload-logs : Upload logs to log server]
2026-02-28 01:22:24.150296 | localhost | Output suppressed because no_log was given
2026-02-28 01:22:24.154630 |
2026-02-28 01:22:24.154828 | LOOP [upload-logs : Compress console log and json output]
2026-02-28 01:22:24.210575 | localhost | skipping: Conditional result was False
2026-02-28 01:22:24.216226 | localhost | skipping: Conditional result was False
2026-02-28 01:22:24.227945 |
2026-02-28 01:22:24.228161 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-28 01:22:24.275496 | localhost | skipping: Conditional result was False
2026-02-28 01:22:24.276162 |
2026-02-28 01:22:24.280145 | localhost | skipping: Conditional result was False
2026-02-28 01:22:24.293536 |
2026-02-28 01:22:24.293795 | LOOP [upload-logs : Upload console log and json output]
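[editor's note] The amphora bootstrap step above fetches the qcow2 image together with a .CHECKSUM file from the object store; the console output is cut off before the checksum value appears. Assuming the CHECKSUM file uses the coreutils `sha256sum` output format (`<hex digest>  <filename>`), which is an assumption not confirmed by this log, the verification can be sketched as:

```python
import hashlib

def verify_checksum(data: bytes, checksum_line: str) -> bool:
    # Assumes checksum_line looks like sha256sum output:
    #   "<hex digest>  <filename>"
    # Returns True when the SHA-256 of `data` matches the recorded digest.
    expected = checksum_line.split()[0]
    return hashlib.sha256(data).hexdigest() == expected

# Self-contained demonstration with synthetic data (no real image download):
payload = b"fake amphora image bytes"
line = hashlib.sha256(payload).hexdigest() + "  octavia-amphora-haproxy.qcow2"
```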