2026-02-23 19:45:25.280152 | Job console starting
2026-02-23 19:45:25.309934 | Updating git repos
2026-02-23 19:45:25.403773 | Cloning repos into workspace
2026-02-23 19:45:25.743485 | Restoring repo states
2026-02-23 19:45:25.781725 | Merging changes
2026-02-23 19:45:26.449012 | Checking out repos
2026-02-23 19:45:26.926988 | Preparing playbooks
2026-02-23 19:45:28.079446 | Running Ansible setup
2026-02-23 19:45:33.936279 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-23 19:45:34.932224 |
2026-02-23 19:45:34.932412 | PLAY [Base pre]
2026-02-23 19:45:34.963109 |
2026-02-23 19:45:34.963294 | TASK [Setup log path fact]
2026-02-23 19:45:34.984279 | orchestrator | ok
2026-02-23 19:45:35.010318 |
2026-02-23 19:45:35.010514 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-23 19:45:35.047399 | orchestrator | ok
2026-02-23 19:45:35.073332 |
2026-02-23 19:45:35.073498 | TASK [emit-job-header : Print job information]
2026-02-23 19:45:35.125433 | # Job Information
2026-02-23 19:45:35.125635 | Ansible Version: 2.16.14
2026-02-23 19:45:35.125669 | Job: testbed-deploy-current-in-a-nutshell-ubuntu-24.04
2026-02-23 19:45:35.125702 | Pipeline: label
2026-02-23 19:45:35.125725 | Executor: 521e9411259a
2026-02-23 19:45:35.125745 | Triggered by: https://github.com/osism/testbed/pull/2849
2026-02-23 19:45:35.125767 | Event ID: 2c827a60-10f0-11f1-956a-694893e4bc8a
2026-02-23 19:45:35.141174 |
2026-02-23 19:45:35.141394 | LOOP [emit-job-header : Print node information]
2026-02-23 19:45:35.400959 | orchestrator | ok:
2026-02-23 19:45:35.401166 | orchestrator | # Node Information
2026-02-23 19:45:35.401200 | orchestrator | Inventory Hostname: orchestrator
2026-02-23 19:45:35.401224 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-23 19:45:35.401259 | orchestrator | Username: zuul-testbed06
2026-02-23 19:45:35.401282 | orchestrator | Distro: Debian 12.13
2026-02-23 19:45:35.401306 | orchestrator | Provider: static-testbed
2026-02-23 19:45:35.401328 | orchestrator | Region:
2026-02-23 19:45:35.401349 | orchestrator | Label: testbed-orchestrator
2026-02-23 19:45:35.401369 | orchestrator | Product Name: OpenStack Nova
2026-02-23 19:45:35.401388 | orchestrator | Interface IP: 81.163.193.140
2026-02-23 19:45:35.417555 |
2026-02-23 19:45:35.417699 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-23 19:45:36.029994 | orchestrator -> localhost | changed
2026-02-23 19:45:36.038293 |
2026-02-23 19:45:36.038419 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-23 19:45:37.572357 | orchestrator -> localhost | changed
2026-02-23 19:45:37.605800 |
2026-02-23 19:45:37.605914 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-23 19:45:38.018488 | orchestrator -> localhost | ok
2026-02-23 19:45:38.025110 |
2026-02-23 19:45:38.025215 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-23 19:45:38.053772 | orchestrator | ok
2026-02-23 19:45:38.082411 | orchestrator | included: /var/lib/zuul/builds/66cbaed88017496cb520464d388d0f6f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-23 19:45:38.099764 |
2026-02-23 19:45:38.099873 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-23 19:45:39.422417 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-23 19:45:39.422728 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/66cbaed88017496cb520464d388d0f6f/work/66cbaed88017496cb520464d388d0f6f_id_rsa
2026-02-23 19:45:39.422790 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/66cbaed88017496cb520464d388d0f6f/work/66cbaed88017496cb520464d388d0f6f_id_rsa.pub
2026-02-23 19:45:39.422828 | orchestrator -> localhost | The key fingerprint is:
2026-02-23 19:45:39.422908 | orchestrator -> localhost | SHA256:ay7dYxTCk0CipMG1eN0B0ibvcd4e6CZqznpToc4BIew zuul-build-sshkey
2026-02-23 19:45:39.422942 | orchestrator -> localhost | The key's randomart image is:
2026-02-23 19:45:39.422986 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-23 19:45:39.423018 | orchestrator -> localhost | |+ ooooo. |
2026-02-23 19:45:39.423048 | orchestrator -> localhost | |.B.o++o . |
2026-02-23 19:45:39.423077 | orchestrator -> localhost | |+.oo+. + . |
2026-02-23 19:45:39.423106 | orchestrator -> localhost | | E. + .= . |
2026-02-23 19:45:39.423135 | orchestrator -> localhost | | . o = So . |
2026-02-23 19:45:39.423167 | orchestrator -> localhost | | o o o +. |
2026-02-23 19:45:39.423196 | orchestrator -> localhost | | o o ..+o. |
2026-02-23 19:45:39.423224 | orchestrator -> localhost | | .* ..=..+ |
2026-02-23 19:45:39.423267 | orchestrator -> localhost | | .=+o o... . |
2026-02-23 19:45:39.423299 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-23 19:45:39.423373 | orchestrator -> localhost | ok: Runtime: 0:00:00.602455
2026-02-23 19:45:39.434724 |
2026-02-23 19:45:39.434901 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-23 19:45:39.476286 | orchestrator | ok
2026-02-23 19:45:39.491175 | orchestrator | included: /var/lib/zuul/builds/66cbaed88017496cb520464d388d0f6f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-23 19:45:39.503000 |
2026-02-23 19:45:39.503113 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-23 19:45:39.526401 | orchestrator | skipping: Conditional result was False
2026-02-23 19:45:39.534053 |
2026-02-23 19:45:39.534172 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-23 19:45:40.159410 | orchestrator | changed
2026-02-23 19:45:40.165805 |
2026-02-23 19:45:40.165897 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-23 19:45:40.429732 | orchestrator | ok
2026-02-23 19:45:40.436944 |
2026-02-23 19:45:40.437046 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-23 19:45:40.874656 | orchestrator | ok
2026-02-23 19:45:40.886173 |
2026-02-23 19:45:40.886330 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-23 19:45:41.337531 | orchestrator | ok
2026-02-23 19:45:41.359555 |
2026-02-23 19:45:41.359701 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-23 19:45:41.434723 | orchestrator | skipping: Conditional result was False
2026-02-23 19:45:41.442386 |
2026-02-23 19:45:41.442517 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-23 19:45:42.075396 | orchestrator -> localhost | changed
2026-02-23 19:45:42.094537 |
2026-02-23 19:45:42.094773 | TASK [add-build-sshkey : Add back temp key]
2026-02-23 19:45:42.467781 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/66cbaed88017496cb520464d388d0f6f/work/66cbaed88017496cb520464d388d0f6f_id_rsa (zuul-build-sshkey)
2026-02-23 19:45:42.468036 | orchestrator -> localhost | ok: Runtime: 0:00:00.019007
2026-02-23 19:45:42.476897 |
2026-02-23 19:45:42.477024 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-23 19:45:42.971649 | orchestrator | ok
2026-02-23 19:45:42.987987 |
2026-02-23 19:45:42.988177 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-23 19:45:43.042928 | orchestrator | skipping: Conditional result was False
2026-02-23 19:45:43.159311 |
2026-02-23 19:45:43.159458 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-23 19:45:43.597317 | orchestrator | ok
2026-02-23 19:45:43.619012 |
2026-02-23 19:45:43.619169 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-23 19:45:43.666853 | orchestrator | ok
2026-02-23 19:45:43.680191 |
2026-02-23 19:45:43.680375 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-23 19:45:44.045797 | orchestrator -> localhost | ok
2026-02-23 19:45:44.053476 |
2026-02-23 19:45:44.053606 | TASK [validate-host : Collect information about the host]
2026-02-23 19:45:45.320201 | orchestrator | ok
2026-02-23 19:45:45.336107 |
2026-02-23 19:45:45.336241 | TASK [validate-host : Sanitize hostname]
2026-02-23 19:45:45.404504 | orchestrator | ok
2026-02-23 19:45:45.420121 |
2026-02-23 19:45:45.420292 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-23 19:45:46.055227 | orchestrator -> localhost | changed
2026-02-23 19:45:46.062075 |
2026-02-23 19:45:46.062282 | TASK [validate-host : Collect information about zuul worker]
2026-02-23 19:45:46.603086 | orchestrator | ok
2026-02-23 19:45:46.608504 |
2026-02-23 19:45:46.608628 | TASK [validate-host : Write out all zuul information for each host]
2026-02-23 19:45:47.193163 | orchestrator -> localhost | changed
2026-02-23 19:45:47.204403 |
2026-02-23 19:45:47.204537 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-23 19:45:47.490695 | orchestrator | ok
2026-02-23 19:45:47.497210 |
2026-02-23 19:45:47.497372 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-23 19:47:00.217659 | orchestrator | changed:
2026-02-23 19:47:00.217895 | orchestrator | .d..t...... src/
2026-02-23 19:47:00.217932 | orchestrator | .d..t...... src/github.com/
2026-02-23 19:47:00.217958 | orchestrator | .d..t...... src/github.com/osism/
2026-02-23 19:47:00.217980 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-23 19:47:00.218001 | orchestrator | RedHat.yml
2026-02-23 19:47:00.232631 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-23 19:47:00.232648 | orchestrator | RedHat.yml
2026-02-23 19:47:00.232699 | orchestrator | = 1.53.0"...
2026-02-23 19:47:18.142582 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-23 19:47:18.293493 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-23 19:47:18.828565 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-23 19:47:19.484895 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-23 19:47:20.355913 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-23 19:47:20.652377 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-02-23 19:47:21.374789 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-02-23 19:47:21.374926 | orchestrator |
2026-02-23 19:47:21.374935 | orchestrator | Providers are signed by their developers.
2026-02-23 19:47:21.374940 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-23 19:47:21.374945 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-23 19:47:21.374951 | orchestrator |
2026-02-23 19:47:21.374956 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-23 19:47:21.374960 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-23 19:47:21.374969 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-23 19:47:21.374973 | orchestrator | you run "tofu init" in the future.
2026-02-23 19:47:21.375188 | orchestrator |
2026-02-23 19:47:21.375203 | orchestrator | OpenTofu has been successfully initialized!
2026-02-23 19:47:21.375207 | orchestrator |
2026-02-23 19:47:21.375217 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-23 19:47:21.375221 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-23 19:47:21.375225 | orchestrator | should now work.
2026-02-23 19:47:21.375229 | orchestrator |
2026-02-23 19:47:21.375232 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-23 19:47:21.375236 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-23 19:47:21.375240 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-23 19:47:21.559018 | orchestrator | Created and switched to workspace "ci"!
2026-02-23 19:47:21.559066 | orchestrator |
2026-02-23 19:47:21.559072 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-23 19:47:21.559078 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-23 19:47:21.559096 | orchestrator | for this configuration.
2026-02-23 19:47:21.681093 | orchestrator | ci.auto.tfvars
2026-02-23 19:47:21.683752 | orchestrator | default_custom.tf
2026-02-23 19:47:22.747977 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-23 19:47:23.805541 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-23 19:47:24.054083 | orchestrator |
2026-02-23 19:47:24.054150 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-23 19:47:24.054159 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-23 19:47:24.054164 | orchestrator | + create
2026-02-23 19:47:24.054168 | orchestrator | <= read (data resources)
2026-02-23 19:47:24.054173 | orchestrator |
2026-02-23 19:47:24.054186 | orchestrator | OpenTofu will perform the following actions:
2026-02-23 19:47:24.054190 | orchestrator |
2026-02-23 19:47:24.054195 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-23 19:47:24.054199 | orchestrator | # (config refers to values not yet known)
2026-02-23 19:47:24.054203 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-23 19:47:24.054207 | orchestrator | + checksum = (known after apply)
2026-02-23 19:47:24.054211 | orchestrator | + created_at = (known after apply)
2026-02-23 19:47:24.054215 | orchestrator | + file = (known after apply)
2026-02-23 19:47:24.054219 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054242 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.054246 | orchestrator | + min_disk_gb = (known after apply)
2026-02-23 19:47:24.054250 | orchestrator | + min_ram_mb = (known after apply)
2026-02-23 19:47:24.054253 | orchestrator | + most_recent = true
2026-02-23 19:47:24.054258 | orchestrator | + name = (known after apply)
2026-02-23 19:47:24.054261 | orchestrator | + protected = (known after apply)
2026-02-23 19:47:24.054265 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.054272 | orchestrator | + schema = (known after apply)
2026-02-23 19:47:24.054275 | orchestrator | + size_bytes = (known after apply)
2026-02-23 19:47:24.054279 | orchestrator | + tags = (known after apply)
2026-02-23 19:47:24.054283 | orchestrator | + updated_at = (known after apply)
2026-02-23 19:47:24.054287 | orchestrator | }
2026-02-23 19:47:24.054291 | orchestrator |
2026-02-23 19:47:24.054295 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-23 19:47:24.054299 | orchestrator | # (config refers to values not yet known)
2026-02-23 19:47:24.054303 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-23 19:47:24.054307 | orchestrator | + checksum = (known after apply)
2026-02-23 19:47:24.054310 | orchestrator | + created_at = (known after apply)
2026-02-23 19:47:24.054314 | orchestrator | + file = (known after apply)
2026-02-23 19:47:24.054318 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054321 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.054325 | orchestrator | + min_disk_gb = (known after apply)
2026-02-23 19:47:24.054329 | orchestrator | + min_ram_mb = (known after apply)
2026-02-23 19:47:24.054332 | orchestrator | + most_recent = true
2026-02-23 19:47:24.054337 | orchestrator | + name = (known after apply)
2026-02-23 19:47:24.054340 | orchestrator | + protected = (known after apply)
2026-02-23 19:47:24.054344 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.054348 | orchestrator | + schema = (known after apply)
2026-02-23 19:47:24.054351 | orchestrator | + size_bytes = (known after apply)
2026-02-23 19:47:24.054355 | orchestrator | + tags = (known after apply)
2026-02-23 19:47:24.054359 | orchestrator | + updated_at = (known after apply)
2026-02-23 19:47:24.054363 | orchestrator | }
2026-02-23 19:47:24.054366 | orchestrator |
2026-02-23 19:47:24.054370 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-23 19:47:24.054374 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-23 19:47:24.054378 | orchestrator | + content = (known after apply)
2026-02-23 19:47:24.054404 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-23 19:47:24.054409 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-23 19:47:24.054412 | orchestrator | + content_md5 = (known after apply)
2026-02-23 19:47:24.054416 | orchestrator | + content_sha1 = (known after apply)
2026-02-23 19:47:24.054420 | orchestrator | + content_sha256 = (known after apply)
2026-02-23 19:47:24.054423 | orchestrator | + content_sha512 = (known after apply)
2026-02-23 19:47:24.054427 | orchestrator | + directory_permission = "0777"
2026-02-23 19:47:24.054431 | orchestrator | + file_permission = "0644"
2026-02-23 19:47:24.054435 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-23 19:47:24.054438 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054442 | orchestrator | }
2026-02-23 19:47:24.054445 | orchestrator |
2026-02-23 19:47:24.054449 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-23 19:47:24.054453 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-23 19:47:24.054457 | orchestrator | + content = (known after apply)
2026-02-23 19:47:24.054460 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-23 19:47:24.054464 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-23 19:47:24.054468 | orchestrator | + content_md5 = (known after apply)
2026-02-23 19:47:24.054471 | orchestrator | + content_sha1 = (known after apply)
2026-02-23 19:47:24.054475 | orchestrator | + content_sha256 = (known after apply)
2026-02-23 19:47:24.054479 | orchestrator | + content_sha512 = (known after apply)
2026-02-23 19:47:24.054482 | orchestrator | + directory_permission = "0777"
2026-02-23 19:47:24.054486 | orchestrator | + file_permission = "0644"
2026-02-23 19:47:24.054494 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-23 19:47:24.054498 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054502 | orchestrator | }
2026-02-23 19:47:24.054505 | orchestrator |
2026-02-23 19:47:24.054515 | orchestrator | # local_file.inventory will be created
2026-02-23 19:47:24.054518 | orchestrator | + resource "local_file" "inventory" {
2026-02-23 19:47:24.054522 | orchestrator | + content = (known after apply)
2026-02-23 19:47:24.054526 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-23 19:47:24.054529 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-23 19:47:24.054533 | orchestrator | + content_md5 = (known after apply)
2026-02-23 19:47:24.054537 | orchestrator | + content_sha1 = (known after apply)
2026-02-23 19:47:24.054540 | orchestrator | + content_sha256 = (known after apply)
2026-02-23 19:47:24.054544 | orchestrator | + content_sha512 = (known after apply)
2026-02-23 19:47:24.054548 | orchestrator | + directory_permission = "0777"
2026-02-23 19:47:24.054551 | orchestrator | + file_permission = "0644"
2026-02-23 19:47:24.054555 | orchestrator | + filename = "inventory.ci"
2026-02-23 19:47:24.054559 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054563 | orchestrator | }
2026-02-23 19:47:24.054566 | orchestrator |
2026-02-23 19:47:24.054570 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-23 19:47:24.054574 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-23 19:47:24.054577 | orchestrator | + content = (sensitive value)
2026-02-23 19:47:24.054581 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-23 19:47:24.054585 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-23 19:47:24.054588 | orchestrator | + content_md5 = (known after apply)
2026-02-23 19:47:24.054592 | orchestrator | + content_sha1 = (known after apply)
2026-02-23 19:47:24.054596 | orchestrator | + content_sha256 = (known after apply)
2026-02-23 19:47:24.054609 | orchestrator | + content_sha512 = (known after apply)
2026-02-23 19:47:24.054613 | orchestrator | + directory_permission = "0700"
2026-02-23 19:47:24.054617 | orchestrator | + file_permission = "0600"
2026-02-23 19:47:24.054620 | orchestrator | + filename = ".id_rsa.ci"
2026-02-23 19:47:24.054624 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054628 | orchestrator | }
2026-02-23 19:47:24.054631 | orchestrator |
2026-02-23 19:47:24.054635 | orchestrator | # null_resource.node_semaphore will be created
2026-02-23 19:47:24.054639 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-23 19:47:24.054642 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054646 | orchestrator | }
2026-02-23 19:47:24.054650 | orchestrator |
2026-02-23 19:47:24.054654 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-23 19:47:24.054657 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-23 19:47:24.054661 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.054665 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.054668 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054672 | orchestrator | + image_id = (known after apply)
2026-02-23 19:47:24.054676 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.054679 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-23 19:47:24.054683 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.054687 | orchestrator | + size = 80
2026-02-23 19:47:24.054690 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.054694 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.054698 | orchestrator | }
2026-02-23 19:47:24.054701 | orchestrator |
2026-02-23 19:47:24.054705 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-23 19:47:24.054709 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-23 19:47:24.054713 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.054716 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.054720 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054746 | orchestrator | + image_id = (known after apply)
2026-02-23 19:47:24.054750 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.054754 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-23 19:47:24.054758 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.054761 | orchestrator | + size = 80
2026-02-23 19:47:24.054765 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.054768 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.054772 | orchestrator | }
2026-02-23 19:47:24.054776 | orchestrator |
2026-02-23 19:47:24.054780 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-23 19:47:24.054783 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-23 19:47:24.054787 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.054791 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.054794 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054798 | orchestrator | + image_id = (known after apply)
2026-02-23 19:47:24.054802 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.054806 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-23 19:47:24.054809 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.054813 | orchestrator | + size = 80
2026-02-23 19:47:24.054817 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.054820 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.054824 | orchestrator | }
2026-02-23 19:47:24.054828 | orchestrator |
2026-02-23 19:47:24.054831 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-23 19:47:24.054835 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-23 19:47:24.054839 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.054842 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.054846 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054850 | orchestrator | + image_id = (known after apply)
2026-02-23 19:47:24.054853 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.054857 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-23 19:47:24.054861 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.054864 | orchestrator | + size = 80
2026-02-23 19:47:24.054868 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.054872 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.054875 | orchestrator | }
2026-02-23 19:47:24.054879 | orchestrator |
2026-02-23 19:47:24.054883 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-23 19:47:24.054886 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-23 19:47:24.054890 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.054894 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.054897 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054901 | orchestrator | + image_id = (known after apply)
2026-02-23 19:47:24.054904 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.054911 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-23 19:47:24.054915 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.054918 | orchestrator | + size = 80
2026-02-23 19:47:24.054922 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.054926 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.054929 | orchestrator | }
2026-02-23 19:47:24.054933 | orchestrator |
2026-02-23 19:47:24.054937 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-23 19:47:24.054940 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-23 19:47:24.054944 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.054948 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.054951 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.054964 | orchestrator | + image_id = (known after apply)
2026-02-23 19:47:24.054967 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.054971 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-23 19:47:24.054975 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.054978 | orchestrator | + size = 80
2026-02-23 19:47:24.054982 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.054985 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.054989 | orchestrator | }
2026-02-23 19:47:24.054993 | orchestrator |
2026-02-23 19:47:24.054996 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-23 19:47:24.055003 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-23 19:47:24.055007 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.055011 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.055014 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.055018 | orchestrator | + image_id = (known after apply)
2026-02-23 19:47:24.055021 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.055025 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-23 19:47:24.055029 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.055032 | orchestrator | + size = 80
2026-02-23 19:47:24.055036 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.055040 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.055043 | orchestrator | }
2026-02-23 19:47:24.055047 | orchestrator |
2026-02-23 19:47:24.055051 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-23 19:47:24.055055 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-23 19:47:24.055058 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.055062 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.055065 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.055069 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.055073 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-23 19:47:24.055076 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.055080 | orchestrator | + size = 20
2026-02-23 19:47:24.055084 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.055087 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.055091 | orchestrator | }
2026-02-23 19:47:24.055095 | orchestrator |
2026-02-23 19:47:24.055098 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-23 19:47:24.055102 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-23 19:47:24.055106 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.055109 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.055113 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.055117 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.055120 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-23 19:47:24.055124 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.055127 | orchestrator | + size = 20
2026-02-23 19:47:24.055131 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.055135 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.055138 | orchestrator | }
2026-02-23 19:47:24.055142 | orchestrator |
2026-02-23 19:47:24.055146 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-23 19:47:24.055149 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-23 19:47:24.055153 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.055157 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.055160 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.055164 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.055168 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-23 19:47:24.055171 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.055179 | orchestrator | + size = 20
2026-02-23 19:47:24.055182 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.055186 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.055190 | orchestrator | }
2026-02-23 19:47:24.055193 | orchestrator |
2026-02-23 19:47:24.055197 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-23 19:47:24.055201 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-23 19:47:24.055204 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.055208 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.055211 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.055215 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.055219 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-23 19:47:24.055222 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.055226 | orchestrator | + size = 20
2026-02-23 19:47:24.055230 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.055233 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.055237 | orchestrator | }
2026-02-23 19:47:24.055240 | orchestrator |
2026-02-23 19:47:24.055244 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-23 19:47:24.055248 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-23 19:47:24.055251 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.055255 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.055259 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.055262 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.055266 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-23 19:47:24.055270 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.055276 | orchestrator | + size = 20
2026-02-23 19:47:24.055280 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.055283 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.055287 | orchestrator | }
2026-02-23 19:47:24.055291 | orchestrator |
2026-02-23 19:47:24.055294 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-23 19:47:24.055298 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-23 19:47:24.055302 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.055305 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.055309 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.055312 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.055316 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-23 19:47:24.055320 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.055323 | orchestrator | + size = 20
2026-02-23 19:47:24.055327 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.055331 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.055334 | orchestrator | }
2026-02-23 19:47:24.055338 | orchestrator |
2026-02-23 19:47:24.055341 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-23 19:47:24.055345 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-23 19:47:24.055349 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.055352 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.055356 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.055364 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.055368 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-23 19:47:24.055371 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.055375 | orchestrator | + size = 20
2026-02-23 19:47:24.055379 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.055417 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.055423 | orchestrator | }
2026-02-23 19:47:24.055429 | orchestrator |
2026-02-23 19:47:24.055434 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-23 19:47:24.055440 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-23 19:47:24.055451 | orchestrator | + attachment = (known after apply)
2026-02-23 19:47:24.055458 | orchestrator | + availability_zone = "nova"
2026-02-23 19:47:24.055463 | orchestrator | + id = (known after apply)
2026-02-23 19:47:24.055469 | orchestrator | + metadata = (known after apply)
2026-02-23 19:47:24.055475 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-23 19:47:24.055480 | orchestrator | + region = (known after apply)
2026-02-23 19:47:24.055487 | orchestrator | + size = 20
2026-02-23 19:47:24.055492 | orchestrator | + volume_retype_policy = "never"
2026-02-23 19:47:24.055495 | orchestrator | + volume_type = "ssd"
2026-02-23 19:47:24.055499 | orchestrator | }
2026-02-23 19:47:24.055503 | orchestrator |
2026-02-23 19:47:24.055506 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-23 19:47:24.055510 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-23 19:47:24.055513 | orchestrator | + attachment = (known after apply) 2026-02-23 19:47:24.055517 | orchestrator | + availability_zone = "nova" 2026-02-23 19:47:24.055521 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.055524 | orchestrator | + metadata = (known after apply) 2026-02-23 19:47:24.055528 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-23 19:47:24.055532 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.055535 | orchestrator | + size = 20 2026-02-23 19:47:24.055539 | orchestrator | + volume_retype_policy = "never" 2026-02-23 19:47:24.055542 | orchestrator | + volume_type = "ssd" 2026-02-23 19:47:24.055546 | orchestrator | } 2026-02-23 19:47:24.055550 | orchestrator | 2026-02-23 19:47:24.055553 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-23 19:47:24.055557 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-23 19:47:24.055561 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-23 19:47:24.055564 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-23 19:47:24.055568 | orchestrator | + all_metadata = (known after apply) 2026-02-23 19:47:24.055572 | orchestrator | + all_tags = (known after apply) 2026-02-23 19:47:24.055575 | orchestrator | + availability_zone = "nova" 2026-02-23 19:47:24.055579 | orchestrator | + config_drive = true 2026-02-23 19:47:24.055582 | orchestrator | + created = (known after apply) 2026-02-23 19:47:24.055586 | orchestrator | + flavor_id = (known after apply) 2026-02-23 19:47:24.055590 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-23 19:47:24.055593 | orchestrator | + force_delete = false 2026-02-23 19:47:24.055597 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-23 19:47:24.055600 | 
orchestrator | + id = (known after apply) 2026-02-23 19:47:24.055604 | orchestrator | + image_id = (known after apply) 2026-02-23 19:47:24.055608 | orchestrator | + image_name = (known after apply) 2026-02-23 19:47:24.055611 | orchestrator | + key_pair = "testbed" 2026-02-23 19:47:24.055615 | orchestrator | + name = "testbed-manager" 2026-02-23 19:47:24.055618 | orchestrator | + power_state = "active" 2026-02-23 19:47:24.055622 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.055626 | orchestrator | + security_groups = (known after apply) 2026-02-23 19:47:24.055629 | orchestrator | + stop_before_destroy = false 2026-02-23 19:47:24.055633 | orchestrator | + updated = (known after apply) 2026-02-23 19:47:24.055636 | orchestrator | + user_data = (sensitive value) 2026-02-23 19:47:24.055640 | orchestrator | 2026-02-23 19:47:24.055644 | orchestrator | + block_device { 2026-02-23 19:47:24.055647 | orchestrator | + boot_index = 0 2026-02-23 19:47:24.055651 | orchestrator | + delete_on_termination = false 2026-02-23 19:47:24.055658 | orchestrator | + destination_type = "volume" 2026-02-23 19:47:24.055661 | orchestrator | + multiattach = false 2026-02-23 19:47:24.055665 | orchestrator | + source_type = "volume" 2026-02-23 19:47:24.055669 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.055676 | orchestrator | } 2026-02-23 19:47:24.055680 | orchestrator | 2026-02-23 19:47:24.055683 | orchestrator | + network { 2026-02-23 19:47:24.055687 | orchestrator | + access_network = false 2026-02-23 19:47:24.055690 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-23 19:47:24.055694 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-23 19:47:24.055698 | orchestrator | + mac = (known after apply) 2026-02-23 19:47:24.055701 | orchestrator | + name = (known after apply) 2026-02-23 19:47:24.055705 | orchestrator | + port = (known after apply) 2026-02-23 19:47:24.055708 | orchestrator | + uuid = (known after apply) 2026-02-23 
19:47:24.055712 | orchestrator | } 2026-02-23 19:47:24.055716 | orchestrator | } 2026-02-23 19:47:24.055719 | orchestrator | 2026-02-23 19:47:24.055723 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-23 19:47:24.055727 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-23 19:47:24.055730 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-23 19:47:24.055734 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-23 19:47:24.055737 | orchestrator | + all_metadata = (known after apply) 2026-02-23 19:47:24.055741 | orchestrator | + all_tags = (known after apply) 2026-02-23 19:47:24.055745 | orchestrator | + availability_zone = "nova" 2026-02-23 19:47:24.055748 | orchestrator | + config_drive = true 2026-02-23 19:47:24.055752 | orchestrator | + created = (known after apply) 2026-02-23 19:47:24.055755 | orchestrator | + flavor_id = (known after apply) 2026-02-23 19:47:24.055759 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-23 19:47:24.055762 | orchestrator | + force_delete = false 2026-02-23 19:47:24.055766 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-23 19:47:24.055770 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.055773 | orchestrator | + image_id = (known after apply) 2026-02-23 19:47:24.055777 | orchestrator | + image_name = (known after apply) 2026-02-23 19:47:24.055780 | orchestrator | + key_pair = "testbed" 2026-02-23 19:47:24.055784 | orchestrator | + name = "testbed-node-0" 2026-02-23 19:47:24.055788 | orchestrator | + power_state = "active" 2026-02-23 19:47:24.055794 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.055798 | orchestrator | + security_groups = (known after apply) 2026-02-23 19:47:24.055801 | orchestrator | + stop_before_destroy = false 2026-02-23 19:47:24.055805 | orchestrator | + updated = (known after apply) 2026-02-23 19:47:24.055809 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-23 19:47:24.055812 | orchestrator | 2026-02-23 19:47:24.055816 | orchestrator | + block_device { 2026-02-23 19:47:24.055819 | orchestrator | + boot_index = 0 2026-02-23 19:47:24.055823 | orchestrator | + delete_on_termination = false 2026-02-23 19:47:24.055827 | orchestrator | + destination_type = "volume" 2026-02-23 19:47:24.055830 | orchestrator | + multiattach = false 2026-02-23 19:47:24.055834 | orchestrator | + source_type = "volume" 2026-02-23 19:47:24.055837 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.055841 | orchestrator | } 2026-02-23 19:47:24.055845 | orchestrator | 2026-02-23 19:47:24.055848 | orchestrator | + network { 2026-02-23 19:47:24.055852 | orchestrator | + access_network = false 2026-02-23 19:47:24.055855 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-23 19:47:24.055859 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-23 19:47:24.055863 | orchestrator | + mac = (known after apply) 2026-02-23 19:47:24.055866 | orchestrator | + name = (known after apply) 2026-02-23 19:47:24.055870 | orchestrator | + port = (known after apply) 2026-02-23 19:47:24.055874 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.055877 | orchestrator | } 2026-02-23 19:47:24.055881 | orchestrator | } 2026-02-23 19:47:24.055885 | orchestrator | 2026-02-23 19:47:24.055888 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-23 19:47:24.055892 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-23 19:47:24.055895 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-23 19:47:24.055902 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-23 19:47:24.055906 | orchestrator | + all_metadata = (known after apply) 2026-02-23 19:47:24.055910 | orchestrator | + all_tags = (known after apply) 2026-02-23 19:47:24.055913 | orchestrator | + availability_zone = "nova" 2026-02-23 19:47:24.055917 
| orchestrator | + config_drive = true 2026-02-23 19:47:24.055920 | orchestrator | + created = (known after apply) 2026-02-23 19:47:24.055924 | orchestrator | + flavor_id = (known after apply) 2026-02-23 19:47:24.055928 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-23 19:47:24.055931 | orchestrator | + force_delete = false 2026-02-23 19:47:24.055935 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-23 19:47:24.055938 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.055942 | orchestrator | + image_id = (known after apply) 2026-02-23 19:47:24.055946 | orchestrator | + image_name = (known after apply) 2026-02-23 19:47:24.055949 | orchestrator | + key_pair = "testbed" 2026-02-23 19:47:24.055953 | orchestrator | + name = "testbed-node-1" 2026-02-23 19:47:24.055956 | orchestrator | + power_state = "active" 2026-02-23 19:47:24.055960 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.055964 | orchestrator | + security_groups = (known after apply) 2026-02-23 19:47:24.055967 | orchestrator | + stop_before_destroy = false 2026-02-23 19:47:24.055971 | orchestrator | + updated = (known after apply) 2026-02-23 19:47:24.055974 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-23 19:47:24.055978 | orchestrator | 2026-02-23 19:47:24.055982 | orchestrator | + block_device { 2026-02-23 19:47:24.055985 | orchestrator | + boot_index = 0 2026-02-23 19:47:24.055989 | orchestrator | + delete_on_termination = false 2026-02-23 19:47:24.055992 | orchestrator | + destination_type = "volume" 2026-02-23 19:47:24.055996 | orchestrator | + multiattach = false 2026-02-23 19:47:24.056000 | orchestrator | + source_type = "volume" 2026-02-23 19:47:24.056003 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.056007 | orchestrator | } 2026-02-23 19:47:24.056010 | orchestrator | 2026-02-23 19:47:24.056014 | orchestrator | + network { 2026-02-23 19:47:24.056018 | orchestrator | + access_network = 
false 2026-02-23 19:47:24.056021 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-23 19:47:24.056025 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-23 19:47:24.056028 | orchestrator | + mac = (known after apply) 2026-02-23 19:47:24.056032 | orchestrator | + name = (known after apply) 2026-02-23 19:47:24.056036 | orchestrator | + port = (known after apply) 2026-02-23 19:47:24.056039 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.056043 | orchestrator | } 2026-02-23 19:47:24.056047 | orchestrator | } 2026-02-23 19:47:24.056050 | orchestrator | 2026-02-23 19:47:24.056054 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-23 19:47:24.056057 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-23 19:47:24.056061 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-23 19:47:24.056065 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-23 19:47:24.056069 | orchestrator | + all_metadata = (known after apply) 2026-02-23 19:47:24.056072 | orchestrator | + all_tags = (known after apply) 2026-02-23 19:47:24.056079 | orchestrator | + availability_zone = "nova" 2026-02-23 19:47:24.056083 | orchestrator | + config_drive = true 2026-02-23 19:47:24.056086 | orchestrator | + created = (known after apply) 2026-02-23 19:47:24.056090 | orchestrator | + flavor_id = (known after apply) 2026-02-23 19:47:24.056094 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-23 19:47:24.056097 | orchestrator | + force_delete = false 2026-02-23 19:47:24.056101 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-23 19:47:24.056105 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.056108 | orchestrator | + image_id = (known after apply) 2026-02-23 19:47:24.056115 | orchestrator | + image_name = (known after apply) 2026-02-23 19:47:24.056119 | orchestrator | + key_pair = "testbed" 2026-02-23 19:47:24.056122 | orchestrator | + name = 
"testbed-node-2" 2026-02-23 19:47:24.056126 | orchestrator | + power_state = "active" 2026-02-23 19:47:24.056129 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.056133 | orchestrator | + security_groups = (known after apply) 2026-02-23 19:47:24.056137 | orchestrator | + stop_before_destroy = false 2026-02-23 19:47:24.056140 | orchestrator | + updated = (known after apply) 2026-02-23 19:47:24.056144 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-23 19:47:24.056148 | orchestrator | 2026-02-23 19:47:24.056151 | orchestrator | + block_device { 2026-02-23 19:47:24.056155 | orchestrator | + boot_index = 0 2026-02-23 19:47:24.056158 | orchestrator | + delete_on_termination = false 2026-02-23 19:47:24.056162 | orchestrator | + destination_type = "volume" 2026-02-23 19:47:24.056168 | orchestrator | + multiattach = false 2026-02-23 19:47:24.056172 | orchestrator | + source_type = "volume" 2026-02-23 19:47:24.056175 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.056179 | orchestrator | } 2026-02-23 19:47:24.056182 | orchestrator | 2026-02-23 19:47:24.056186 | orchestrator | + network { 2026-02-23 19:47:24.056190 | orchestrator | + access_network = false 2026-02-23 19:47:24.056193 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-23 19:47:24.056197 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-23 19:47:24.056200 | orchestrator | + mac = (known after apply) 2026-02-23 19:47:24.056204 | orchestrator | + name = (known after apply) 2026-02-23 19:47:24.056208 | orchestrator | + port = (known after apply) 2026-02-23 19:47:24.056211 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.056215 | orchestrator | } 2026-02-23 19:47:24.056218 | orchestrator | } 2026-02-23 19:47:24.056222 | orchestrator | 2026-02-23 19:47:24.056226 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-23 19:47:24.056229 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-23 19:47:24.056233 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-23 19:47:24.056237 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-23 19:47:24.056240 | orchestrator | + all_metadata = (known after apply) 2026-02-23 19:47:24.056244 | orchestrator | + all_tags = (known after apply) 2026-02-23 19:47:24.056247 | orchestrator | + availability_zone = "nova" 2026-02-23 19:47:24.056251 | orchestrator | + config_drive = true 2026-02-23 19:47:24.056255 | orchestrator | + created = (known after apply) 2026-02-23 19:47:24.056258 | orchestrator | + flavor_id = (known after apply) 2026-02-23 19:47:24.056262 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-23 19:47:24.056265 | orchestrator | + force_delete = false 2026-02-23 19:47:24.056269 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-23 19:47:24.056273 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.056276 | orchestrator | + image_id = (known after apply) 2026-02-23 19:47:24.056280 | orchestrator | + image_name = (known after apply) 2026-02-23 19:47:24.056283 | orchestrator | + key_pair = "testbed" 2026-02-23 19:47:24.056287 | orchestrator | + name = "testbed-node-3" 2026-02-23 19:47:24.056291 | orchestrator | + power_state = "active" 2026-02-23 19:47:24.056294 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.056298 | orchestrator | + security_groups = (known after apply) 2026-02-23 19:47:24.056301 | orchestrator | + stop_before_destroy = false 2026-02-23 19:47:24.056305 | orchestrator | + updated = (known after apply) 2026-02-23 19:47:24.056309 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-23 19:47:24.056312 | orchestrator | 2026-02-23 19:47:24.056316 | orchestrator | + block_device { 2026-02-23 19:47:24.056322 | orchestrator | + boot_index = 0 2026-02-23 19:47:24.056326 | orchestrator | + delete_on_termination = false 2026-02-23 
19:47:24.056330 | orchestrator | + destination_type = "volume" 2026-02-23 19:47:24.056337 | orchestrator | + multiattach = false 2026-02-23 19:47:24.056341 | orchestrator | + source_type = "volume" 2026-02-23 19:47:24.056344 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.056348 | orchestrator | } 2026-02-23 19:47:24.056352 | orchestrator | 2026-02-23 19:47:24.056355 | orchestrator | + network { 2026-02-23 19:47:24.056359 | orchestrator | + access_network = false 2026-02-23 19:47:24.056362 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-23 19:47:24.056366 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-23 19:47:24.056370 | orchestrator | + mac = (known after apply) 2026-02-23 19:47:24.056373 | orchestrator | + name = (known after apply) 2026-02-23 19:47:24.056377 | orchestrator | + port = (known after apply) 2026-02-23 19:47:24.056402 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.056407 | orchestrator | } 2026-02-23 19:47:24.056411 | orchestrator | } 2026-02-23 19:47:24.056415 | orchestrator | 2026-02-23 19:47:24.056419 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-23 19:47:24.056423 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-23 19:47:24.056427 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-23 19:47:24.056430 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-23 19:47:24.056434 | orchestrator | + all_metadata = (known after apply) 2026-02-23 19:47:24.056438 | orchestrator | + all_tags = (known after apply) 2026-02-23 19:47:24.056442 | orchestrator | + availability_zone = "nova" 2026-02-23 19:47:24.056446 | orchestrator | + config_drive = true 2026-02-23 19:47:24.056450 | orchestrator | + created = (known after apply) 2026-02-23 19:47:24.056454 | orchestrator | + flavor_id = (known after apply) 2026-02-23 19:47:24.056458 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-23 19:47:24.056461 | 
orchestrator | + force_delete = false 2026-02-23 19:47:24.056465 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-23 19:47:24.056469 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.056473 | orchestrator | + image_id = (known after apply) 2026-02-23 19:47:24.056477 | orchestrator | + image_name = (known after apply) 2026-02-23 19:47:24.056481 | orchestrator | + key_pair = "testbed" 2026-02-23 19:47:24.056485 | orchestrator | + name = "testbed-node-4" 2026-02-23 19:47:24.056488 | orchestrator | + power_state = "active" 2026-02-23 19:47:24.056492 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.056496 | orchestrator | + security_groups = (known after apply) 2026-02-23 19:47:24.056500 | orchestrator | + stop_before_destroy = false 2026-02-23 19:47:24.056504 | orchestrator | + updated = (known after apply) 2026-02-23 19:47:24.056508 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-23 19:47:24.056512 | orchestrator | 2026-02-23 19:47:24.056516 | orchestrator | + block_device { 2026-02-23 19:47:24.056520 | orchestrator | + boot_index = 0 2026-02-23 19:47:24.056523 | orchestrator | + delete_on_termination = false 2026-02-23 19:47:24.056527 | orchestrator | + destination_type = "volume" 2026-02-23 19:47:24.056531 | orchestrator | + multiattach = false 2026-02-23 19:47:24.056535 | orchestrator | + source_type = "volume" 2026-02-23 19:47:24.056539 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.056543 | orchestrator | } 2026-02-23 19:47:24.056547 | orchestrator | 2026-02-23 19:47:24.056551 | orchestrator | + network { 2026-02-23 19:47:24.056554 | orchestrator | + access_network = false 2026-02-23 19:47:24.056558 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-23 19:47:24.056562 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-23 19:47:24.056566 | orchestrator | + mac = (known after apply) 2026-02-23 19:47:24.056570 | orchestrator | + name = (known 
after apply) 2026-02-23 19:47:24.056574 | orchestrator | + port = (known after apply) 2026-02-23 19:47:24.056581 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.056585 | orchestrator | } 2026-02-23 19:47:24.056589 | orchestrator | } 2026-02-23 19:47:24.056597 | orchestrator | 2026-02-23 19:47:24.056601 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-23 19:47:24.056605 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-23 19:47:24.056609 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-23 19:47:24.056613 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-23 19:47:24.056617 | orchestrator | + all_metadata = (known after apply) 2026-02-23 19:47:24.056621 | orchestrator | + all_tags = (known after apply) 2026-02-23 19:47:24.056625 | orchestrator | + availability_zone = "nova" 2026-02-23 19:47:24.056629 | orchestrator | + config_drive = true 2026-02-23 19:47:24.056632 | orchestrator | + created = (known after apply) 2026-02-23 19:47:24.056636 | orchestrator | + flavor_id = (known after apply) 2026-02-23 19:47:24.056640 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-23 19:47:24.056644 | orchestrator | + force_delete = false 2026-02-23 19:47:24.056652 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-23 19:47:24.056656 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.056659 | orchestrator | + image_id = (known after apply) 2026-02-23 19:47:24.056663 | orchestrator | + image_name = (known after apply) 2026-02-23 19:47:24.056667 | orchestrator | + key_pair = "testbed" 2026-02-23 19:47:24.056671 | orchestrator | + name = "testbed-node-5" 2026-02-23 19:47:24.056675 | orchestrator | + power_state = "active" 2026-02-23 19:47:24.056679 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.056683 | orchestrator | + security_groups = (known after apply) 2026-02-23 19:47:24.056686 | orchestrator | + 
stop_before_destroy = false 2026-02-23 19:47:24.056690 | orchestrator | + updated = (known after apply) 2026-02-23 19:47:24.056694 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-23 19:47:24.056698 | orchestrator | 2026-02-23 19:47:24.056702 | orchestrator | + block_device { 2026-02-23 19:47:24.056706 | orchestrator | + boot_index = 0 2026-02-23 19:47:24.056710 | orchestrator | + delete_on_termination = false 2026-02-23 19:47:24.056714 | orchestrator | + destination_type = "volume" 2026-02-23 19:47:24.056718 | orchestrator | + multiattach = false 2026-02-23 19:47:24.056721 | orchestrator | + source_type = "volume" 2026-02-23 19:47:24.056725 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.056729 | orchestrator | } 2026-02-23 19:47:24.056733 | orchestrator | 2026-02-23 19:47:24.056737 | orchestrator | + network { 2026-02-23 19:47:24.056741 | orchestrator | + access_network = false 2026-02-23 19:47:24.056745 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-23 19:47:24.056748 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-23 19:47:24.056752 | orchestrator | + mac = (known after apply) 2026-02-23 19:47:24.056756 | orchestrator | + name = (known after apply) 2026-02-23 19:47:24.056760 | orchestrator | + port = (known after apply) 2026-02-23 19:47:24.056764 | orchestrator | + uuid = (known after apply) 2026-02-23 19:47:24.056768 | orchestrator | } 2026-02-23 19:47:24.056772 | orchestrator | } 2026-02-23 19:47:24.056776 | orchestrator | 2026-02-23 19:47:24.056780 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-23 19:47:24.056784 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-23 19:47:24.056787 | orchestrator | + fingerprint = (known after apply) 2026-02-23 19:47:24.056791 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.056795 | orchestrator | + name = "testbed" 2026-02-23 19:47:24.056799 | orchestrator | + private_key = 
(sensitive value) 2026-02-23 19:47:24.056803 | orchestrator | + public_key = (known after apply) 2026-02-23 19:47:24.056807 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.056811 | orchestrator | + user_id = (known after apply) 2026-02-23 19:47:24.056815 | orchestrator | } 2026-02-23 19:47:24.056818 | orchestrator | 2026-02-23 19:47:24.056822 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-23 19:47:24.056826 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-23 19:47:24.056834 | orchestrator | + device = (known after apply) 2026-02-23 19:47:24.056838 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.056842 | orchestrator | + instance_id = (known after apply) 2026-02-23 19:47:24.056846 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.056849 | orchestrator | + volume_id = (known after apply) 2026-02-23 19:47:24.056853 | orchestrator | } 2026-02-23 19:47:24.056857 | orchestrator | 2026-02-23 19:47:24.056861 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-23 19:47:24.056865 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-23 19:47:24.056869 | orchestrator | + device = (known after apply) 2026-02-23 19:47:24.056873 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.056877 | orchestrator | + instance_id = (known after apply) 2026-02-23 19:47:24.056880 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.056884 | orchestrator | + volume_id = (known after apply) 2026-02-23 19:47:24.056888 | orchestrator | } 2026-02-23 19:47:24.056892 | orchestrator | 2026-02-23 19:47:24.056896 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-23 19:47:24.056900 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-23 19:47:24.067255 | orchestrator | + network_id = (known after apply) 2026-02-23 19:47:24.067258 | orchestrator | + no_gateway = false 2026-02-23 19:47:24.067262 | orchestrator | + region = (known after apply) 2026-02-23 19:47:24.067266 | orchestrator | + service_types = (known after apply) 2026-02-23 19:47:24.067272 | orchestrator | + tenant_id = (known after apply) 2026-02-23 19:47:24.067276 | orchestrator | 2026-02-23 19:47:24.067280 | orchestrator | + allocation_pool { 2026-02-23 19:47:24.067283 | orchestrator | + end = "192.168.31.250" 2026-02-23 19:47:24.067287 | orchestrator | + start = "192.168.31.200" 2026-02-23 19:47:24.067290 | orchestrator | } 2026-02-23 19:47:24.067294 | orchestrator | } 2026-02-23 19:47:24.067298 | orchestrator | 2026-02-23 19:47:24.067301 | orchestrator | # terraform_data.image will be created 2026-02-23 19:47:24.067305 | orchestrator | + resource "terraform_data" "image" { 2026-02-23 19:47:24.067308 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.067312 | orchestrator | + input = "Ubuntu 24.04" 2026-02-23 19:47:24.067316 | orchestrator | + output = (known after apply) 2026-02-23 19:47:24.067319 | orchestrator | } 2026-02-23 19:47:24.067323 | orchestrator | 2026-02-23 19:47:24.067326 | orchestrator | # terraform_data.image_node will be created 2026-02-23 19:47:24.067330 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-23 19:47:24.067334 | orchestrator | + id = (known after apply) 2026-02-23 19:47:24.067337 | orchestrator | + input = "Ubuntu 24.04" 2026-02-23 19:47:24.067341 | orchestrator | + output = (known after apply) 2026-02-23 19:47:24.067344 | orchestrator | } 2026-02-23 19:47:24.067348 | orchestrator | 2026-02-23 19:47:24.067352 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-02-23 19:47:24.067355 | orchestrator |
2026-02-23 19:47:24.067359 | orchestrator | Changes to Outputs:
2026-02-23 19:47:24.067363 | orchestrator | + manager_address = (sensitive value)
2026-02-23 19:47:24.067366 | orchestrator | + private_key = (sensitive value)
2026-02-23 19:47:24.346086 | orchestrator | terraform_data.image_node: Creating...
2026-02-23 19:47:24.346166 | orchestrator | terraform_data.image: Creating...
2026-02-23 19:47:24.346178 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=b0b3d6ec-95f2-8bb9-8497-2c3a2b53c40a]
2026-02-23 19:47:24.347059 | orchestrator | terraform_data.image: Creation complete after 0s [id=36c33220-8d07-41a2-4169-ed43c2d6ec31]
2026-02-23 19:47:24.396099 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-23 19:47:24.396169 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-23 19:47:24.396179 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-23 19:47:24.416317 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-23 19:47:24.416398 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-23 19:47:24.416408 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-23 19:47:24.426099 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-23 19:47:24.429833 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-23 19:47:24.437754 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-23 19:47:24.437797 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-23 19:47:24.994734 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-02-23 19:47:24.998581 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-23 19:47:25.617824 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=c72c0565-a370-43ba-ba71-9afa06cf8883]
2026-02-23 19:47:25.620637 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-23 19:47:25.691534 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-23 19:47:25.701657 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-23 19:47:25.756460 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-23 19:47:25.773371 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-23 19:47:25.777562 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=040663a188ec643bbddc4c74ae003d79cedd818e]
2026-02-23 19:47:25.788994 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-23 19:47:25.805062 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=271c533cd59e9ae0c89415fabb0f1a84ba808c71]
2026-02-23 19:47:25.822514 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-23 19:47:26.865912 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=c9b23149-e7ad-4be8-bea4-e8e2358a0b0c]
2026-02-23 19:47:26.872041 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-23 19:47:28.166405 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=33628d78-ee4f-4c3b-aa76-e2d4933b92b0]
2026-02-23 19:47:28.643769 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-23 19:47:28.643838 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=ff5a095b-a008-4c91-9745-d5e81356257a]
2026-02-23 19:47:28.643852 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-23 19:47:28.643863 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=5aac1e3f-8db2-4358-9586-7110a9e5b654]
2026-02-23 19:47:28.643875 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=5253639c-fb89-4131-a977-1b9b70ff9a21]
2026-02-23 19:47:28.643885 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-23 19:47:28.643896 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-23 19:47:28.643907 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=3a7ae571-7eb7-4840-83c2-d00e4c8c1163]
2026-02-23 19:47:28.643918 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-23 19:47:28.643928 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=bead3466-decd-4fa8-a04b-557b053b82da]
2026-02-23 19:47:28.643939 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-23 19:47:28.643950 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=11854c96-d8a1-4784-a235-d2862629dfe4]
2026-02-23 19:47:28.643961 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-23 19:47:28.643972 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=1bc90d63-4a85-4c90-b970-1ea304425c33]
2026-02-23 19:47:28.643983 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=7c30197a-895f-4957-9949-9f1150308fa9]
2026-02-23 19:47:30.252444 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=2e51f7a6-a420-442c-8760-66c71fe023e5]
2026-02-23 19:47:31.613245 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=d4e3a894-d2c6-47f3-8ba9-1b6214637a5a]
2026-02-23 19:47:31.616520 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=9e763b21-c0db-4257-a8de-dc54d7c7ac08]
2026-02-23 19:47:31.625941 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=eef1630a-c3ac-45bb-907d-b74eee84efee]
2026-02-23 19:47:31.661787 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=8040d0c1-88b9-4bf5-a9b8-5090efbb82ba]
2026-02-23 19:47:31.689242 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=51cc313f-67b3-4692-9983-b1d477fcfc79]
2026-02-23 19:47:31.718475 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=1df12093-382b-4d3e-affc-8b6c9f04cec0]
2026-02-23 19:47:33.356015 | orchestrator | openstack_networking_router_v2.router: Creation complete after 5s [id=679c5e2e-d765-4758-9c8f-fc9012e8768b]
2026-02-23 19:47:33.361674 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-23 19:47:33.364447 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-23 19:47:33.365705 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-23 19:47:33.624583 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=7d145be9-8ef7-49c2-8631-d791c2717e9d]
2026-02-23 19:47:33.637109 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-23 19:47:33.638983 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-23 19:47:33.640544 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-23 19:47:33.646547 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-23 19:47:33.646960 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-23 19:47:33.647870 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-23 19:47:33.648371 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=4bdff3d8-dacd-4a3f-9604-9d561a85a6ba]
2026-02-23 19:47:33.655010 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-23 19:47:33.655077 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-23 19:47:33.655560 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-23 19:47:33.868267 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=8ea89f11-cf77-4ec2-93f2-6f0477ecb7f2]
2026-02-23 19:47:33.873446 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-23 19:47:33.890568 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=c1475092-4d77-4d43-8e31-69a6ed458d83]
2026-02-23 19:47:33.900683 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-23 19:47:34.032804 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=78baf3c8-ed77-46f5-8169-8450133ede01]
2026-02-23 19:47:34.042430 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-23 19:47:34.233296 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=7680612d-c21a-4c3d-8ef8-3ed08fd831e2]
2026-02-23 19:47:34.244923 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-23 19:47:34.331438 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=6e0d3c55-9b24-4341-a2a6-580c7ab1857d]
2026-02-23 19:47:34.343116 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-23 19:47:34.544434 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=9add7b63-c632-41ea-8774-3873c0a3e1ec]
2026-02-23 19:47:34.556476 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-23 19:47:34.724949 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=0a79f940-1dae-4d18-b034-98a4176ac3ed]
2026-02-23 19:47:34.735972 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-23 19:47:35.131218 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=a45f1c9e-d364-4ab2-827e-d559e71815ca]
2026-02-23 19:47:35.182999 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=f9ca4ad9-7ccf-4af2-802a-09b65db302ed]
2026-02-23 19:47:35.318895 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=346ad253-dd0b-437f-82ac-9875717212db]
2026-02-23 19:47:35.549418 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=96972253-9e53-4977-8a59-bbfdfa5aaea5]
2026-02-23 19:47:35.624442 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=4ad9e5c8-0422-4eb4-a7b0-9db75bdd313c]
2026-02-23 19:47:35.770210 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=2bf5412f-1965-415f-b62c-26ca73a86068]
2026-02-23 19:47:36.008861 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=25fccad9-e381-49a2-8694-adf4bc97a2a7]
2026-02-23 19:47:36.039821 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=09aba50d-d76f-407d-b8ed-f20cf479155e]
2026-02-23 19:47:36.069184 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=8b1c8f26-ef6c-4569-9b39-148effeaf5ea]
2026-02-23 19:47:37.542081 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=e19c3d70-534d-41f1-b8f7-7408e1c8e39b]
2026-02-23 19:47:37.560674 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-23 19:47:37.575686 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-23 19:47:37.580823 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-23 19:47:37.584991 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-23 19:47:37.593727 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-23 19:47:37.597332 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-23 19:47:37.600502 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-23 19:47:39.834404 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=5f37deb6-5ced-40a5-a390-2744103f8f09]
2026-02-23 19:47:39.842463 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-23 19:47:39.847584 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-23 19:47:39.849887 | orchestrator | local_file.inventory: Creating...
2026-02-23 19:47:39.854745 | orchestrator | local_file.inventory: Creation complete after 0s [id=e570d94f80bdecc54cb82cc49cb3fdfa43b0a21d]
2026-02-23 19:47:39.854888 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f8482e1004e4cb5cd41615ef420bfbc08bdc0b41]
2026-02-23 19:47:42.568455 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 3s [id=5f37deb6-5ced-40a5-a390-2744103f8f09]
2026-02-23 19:47:47.584118 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-23 19:47:47.584251 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-23 19:47:47.588722 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-23 19:47:47.593946 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-23 19:47:47.598207 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-23 19:47:47.601686 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-23 19:47:57.592800 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-23 19:47:57.592911 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-23 19:47:57.592926 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-23 19:47:57.595128 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-23 19:47:57.598444 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-23 19:47:57.602640 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-23 19:48:07.601352 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-23 19:48:07.601496 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-02-23 19:48:07.601517 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-23 19:48:07.601534 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-02-23 19:48:07.601552 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-02-23 19:48:07.603718 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-23 19:48:08.634145 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=52680e49-3793-49da-95dd-81ffb4baef3f]
2026-02-23 19:48:08.946755 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=4b5b7771-32a2-4d97-a37c-c83f7f2ca89d]
2026-02-23 19:48:17.609543 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-02-23 19:48:17.609753 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-02-23 19:48:17.609801 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-02-23 19:48:17.609897 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-02-23 19:48:18.550772 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=b0dd0899-8c7d-4be9-b6a4-bb55becfe38a]
2026-02-23 19:48:18.687486 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=369af650-aad5-4abb-a8e7-f8296dd6a56a]
2026-02-23 19:48:18.763199 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=484a776d-c70d-4c8e-8b02-7dcd48a1498b]
2026-02-23 19:48:19.142265 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=1c69f87e-95fc-4303-9181-a15d9e9a459e]
2026-02-23 19:48:19.158929 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-23 19:48:19.164756 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=1791372339266605066]
2026-02-23 19:48:19.168209 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-23 19:48:19.168764 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-23 19:48:19.170193 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-23 19:48:19.179231 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-23 19:48:19.185615 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-23 19:48:19.186163 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-23 19:48:19.191280 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-23 19:48:19.200025 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-23 19:48:19.201575 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-23 19:48:19.210217 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-23 19:48:22.877014 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=b0dd0899-8c7d-4be9-b6a4-bb55becfe38a/33628d78-ee4f-4c3b-aa76-e2d4933b92b0]
2026-02-23 19:48:22.877110 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=1c69f87e-95fc-4303-9181-a15d9e9a459e/ff5a095b-a008-4c91-9745-d5e81356257a]
2026-02-23 19:48:22.900732 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=484a776d-c70d-4c8e-8b02-7dcd48a1498b/7c30197a-895f-4957-9949-9f1150308fa9]
2026-02-23 19:48:22.913260 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=b0dd0899-8c7d-4be9-b6a4-bb55becfe38a/1bc90d63-4a85-4c90-b970-1ea304425c33]
2026-02-23 19:48:22.930983 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=1c69f87e-95fc-4303-9181-a15d9e9a459e/5253639c-fb89-4131-a977-1b9b70ff9a21]
2026-02-23 19:48:22.940034 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=484a776d-c70d-4c8e-8b02-7dcd48a1498b/11854c96-d8a1-4784-a235-d2862629dfe4]
2026-02-23 19:48:29.018443 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=b0dd0899-8c7d-4be9-b6a4-bb55becfe38a/3a7ae571-7eb7-4840-83c2-d00e4c8c1163]
2026-02-23 19:48:29.039719 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=484a776d-c70d-4c8e-8b02-7dcd48a1498b/bead3466-decd-4fa8-a04b-557b053b82da]
2026-02-23 19:48:29.047685 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=1c69f87e-95fc-4303-9181-a15d9e9a459e/5aac1e3f-8db2-4358-9586-7110a9e5b654]
2026-02-23 19:48:29.201640 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-23 19:48:39.206751 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-23 19:48:39.661476 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=18142587-37ef-4b01-8860-805732932306]
2026-02-23 19:48:39.672688 | orchestrator |
2026-02-23 19:48:39.672811 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-23 19:48:39.672840 | orchestrator |
2026-02-23 19:48:39.672865 | orchestrator | Outputs:
2026-02-23 19:48:39.672873 | orchestrator |
2026-02-23 19:48:39.672878 | orchestrator | manager_address =
2026-02-23 19:48:39.672884 | orchestrator | private_key =
2026-02-23 19:48:39.759613 | orchestrator | ok: Runtime: 0:01:21.911794
2026-02-23 19:48:39.805745 |
2026-02-23 19:48:39.805910 | TASK [Create infrastructure (stable)]
2026-02-23 19:48:40.342823 | orchestrator | skipping: Conditional result was False
2026-02-23 19:48:40.352683 |
2026-02-23 19:48:40.352806 | TASK [Fetch manager address]
2026-02-23 19:48:40.824963 | orchestrator | ok
2026-02-23 19:48:40.832885 |
2026-02-23 19:48:40.833008 | TASK [Set manager_host address]
2026-02-23 19:48:40.902515 | orchestrator | ok
2026-02-23 19:48:40.913569 |
2026-02-23 19:48:40.913700 | LOOP [Update ansible collections]
2026-02-23 19:48:42.063843 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-23 19:48:42.064217 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-23 19:48:42.064271 | orchestrator | Starting galaxy collection install process
2026-02-23 19:48:42.064298 | orchestrator | Process install dependency map
2026-02-23 19:48:42.064323 | orchestrator | Starting collection install process
2026-02-23 19:48:42.064345 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2026-02-23 19:48:42.064372 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2026-02-23 19:48:42.064399 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-23 19:48:42.064451 | orchestrator | ok: Item: commons Runtime: 0:00:00.791723
2026-02-23 19:48:43.195195 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-23 19:48:43.195318 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-23 19:48:43.195349 | orchestrator | Starting galaxy collection install process
2026-02-23 19:48:43.195373 | orchestrator | Process install dependency map
2026-02-23 19:48:43.195396 | orchestrator | Starting collection install process
2026-02-23 19:48:43.195416 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2026-02-23 19:48:43.195448 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2026-02-23 19:48:43.195468 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-23 19:48:43.195501 | orchestrator | ok: Item: services Runtime: 0:00:00.786283
2026-02-23 19:48:43.211488 |
2026-02-23 19:48:43.211613 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-23 19:48:53.839006 | orchestrator | ok
2026-02-23 19:48:53.851118 |
2026-02-23 19:48:53.851242 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-23 19:49:53.900686 | orchestrator | ok
2026-02-23 19:49:53.912080 |
2026-02-23 19:49:53.912223 | TASK [Fetch manager ssh hostkey]
2026-02-23 19:49:55.489444 | orchestrator | Output suppressed because no_log was given
2026-02-23 19:49:55.505371 |
2026-02-23 19:49:55.505560 | TASK [Get ssh keypair from terraform environment]
2026-02-23 19:49:56.045510 | orchestrator | ok: Runtime: 0:00:00.007483
2026-02-23 19:49:56.065104 |
2026-02-23 19:49:56.065277 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-23 19:49:56.113842 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-23 19:49:56.124637 |
2026-02-23 19:49:56.124771 | TASK [Run manager part 0]
2026-02-23 19:49:57.203190 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-23 19:49:57.265642 | orchestrator |
2026-02-23 19:49:57.265695 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-23 19:49:57.265705 | orchestrator |
2026-02-23 19:49:57.265722 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-23 19:49:59.172270 | orchestrator | ok: [testbed-manager]
2026-02-23 19:49:59.172327 | orchestrator |
2026-02-23 19:49:59.172349 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-23 19:49:59.172358 | orchestrator |
2026-02-23 19:49:59.172367 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-23 19:50:01.004891 | orchestrator | ok: [testbed-manager]
2026-02-23 19:50:01.004941 | orchestrator |
2026-02-23 19:50:01.004952 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-23 19:50:01.676499 | orchestrator | ok: [testbed-manager]
2026-02-23 19:50:01.676558 | orchestrator |
2026-02-23 19:50:01.676571 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-23 19:50:01.721532 | orchestrator | skipping: [testbed-manager]
2026-02-23 19:50:01.721582 | orchestrator |
2026-02-23 19:50:01.721594 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-23 19:50:01.753209 | orchestrator | skipping: [testbed-manager]
2026-02-23 19:50:01.753253 | orchestrator |
2026-02-23 19:50:01.753264 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-23 19:50:01.787358 | orchestrator | skipping: [testbed-manager]
2026-02-23 19:50:01.787406 | orchestrator |
2026-02-23 19:50:01.787415 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-23 19:50:01.821526 | orchestrator | skipping: [testbed-manager]
2026-02-23 19:50:01.821570 | orchestrator |
2026-02-23 19:50:01.821578 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-23 19:50:01.860578 | orchestrator | skipping: [testbed-manager]
2026-02-23 19:50:01.860622 | orchestrator |
2026-02-23 19:50:01.860632 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-02-23 19:50:01.896497 | orchestrator | skipping: [testbed-manager]
2026-02-23 19:50:01.896538 | orchestrator |
2026-02-23 19:50:01.896548 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-02-23 19:50:01.926942 | orchestrator | skipping: [testbed-manager]
2026-02-23 19:50:01.926977 | orchestrator |
2026-02-23 19:50:01.926986 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-02-23 19:50:02.611606 | orchestrator | changed: [testbed-manager]
2026-02-23 19:50:02.611647 | orchestrator |
2026-02-23 19:50:02.611656 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-02-23 19:52:47.264194 | orchestrator | changed: [testbed-manager]
2026-02-23 19:52:47.264393 | orchestrator |
2026-02-23 19:52:47.264425 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-23 19:54:08.737569 | orchestrator | changed: [testbed-manager]
2026-02-23 19:54:08.737667 | orchestrator |
2026-02-23 19:54:08.737683 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-23 19:54:28.828838 | orchestrator | changed: [testbed-manager]
2026-02-23 19:54:28.828917 | orchestrator |
2026-02-23 19:54:28.828927 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-23 19:54:37.446218 | orchestrator | changed: [testbed-manager]
2026-02-23 19:54:37.446276 | orchestrator |
2026-02-23 19:54:37.446284 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-02-23 19:54:37.494226 | orchestrator | ok: [testbed-manager]
2026-02-23 19:54:37.494289 | orchestrator |
2026-02-23 19:54:37.494307 | orchestrator | TASK [Get current user] ********************************************************
2026-02-23 19:54:38.317527 | orchestrator | ok: [testbed-manager]
2026-02-23 19:54:38.317566 | orchestrator |
2026-02-23 19:54:38.317576 | orchestrator | TASK [Create venv directory] ***************************************************
2026-02-23 19:54:39.056181 | orchestrator | changed: [testbed-manager]
2026-02-23 19:54:39.056267 | orchestrator |
2026-02-23 19:54:39.056285 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-02-23 19:54:45.141451 | orchestrator | changed: [testbed-manager]
2026-02-23 19:54:45.141579 | orchestrator |
2026-02-23 19:54:45.141621 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-02-23 19:54:51.026544 | orchestrator | changed: [testbed-manager]
2026-02-23 19:54:51.026628 | orchestrator |
2026-02-23 19:54:51.026647 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-02-23 19:54:53.709418 | orchestrator | changed: [testbed-manager]
2026-02-23 19:54:53.709504 | orchestrator |
2026-02-23 19:54:53.709520 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-02-23 19:54:55.476757 | orchestrator | changed: [testbed-manager]
2026-02-23 19:54:55.476884 | orchestrator |
2026-02-23 19:54:55.476902 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-02-23
19:54:56.606091 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-23 19:54:56.606191 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-23 19:54:56.606215 | orchestrator | 2026-02-23 19:54:56.606237 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-23 19:54:56.651093 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-23 19:54:56.651148 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-23 19:54:56.651154 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-23 19:54:56.651159 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-02-23 19:55:01.967964 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-23 19:55:01.968003 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-23 19:55:01.968011 | orchestrator | 2026-02-23 19:55:01.968017 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-23 19:55:02.532325 | orchestrator | changed: [testbed-manager] 2026-02-23 19:55:02.532374 | orchestrator | 2026-02-23 19:55:02.532383 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-23 19:56:22.328304 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-23 19:56:22.328470 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-23 19:56:22.328480 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-23 19:56:22.328487 | orchestrator | 2026-02-23 19:56:22.328494 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-23 19:56:24.590603 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-02-23 19:56:24.591313 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-23 19:56:24.591346 | orchestrator | 2026-02-23 19:56:24.591359 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-23 19:56:24.591372 | orchestrator | 2026-02-23 19:56:24.591384 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-23 19:56:26.034765 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:26.034856 | orchestrator | 2026-02-23 19:56:26.034875 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-23 19:56:26.089076 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:26.089156 | orchestrator | 2026-02-23 19:56:26.089168 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-23 19:56:26.161713 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:26.161777 | orchestrator | 2026-02-23 19:56:26.161787 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-23 19:56:26.952783 | orchestrator | changed: [testbed-manager] 2026-02-23 19:56:26.952893 | orchestrator | 2026-02-23 19:56:26.952920 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-23 19:56:27.660408 | orchestrator | changed: [testbed-manager] 2026-02-23 19:56:27.660483 | orchestrator | 2026-02-23 19:56:27.660505 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-23 19:56:29.080761 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-23 19:56:29.080852 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-23 19:56:29.080868 | orchestrator | 2026-02-23 19:56:29.080899 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-02-23 19:56:30.474937 | orchestrator | changed: [testbed-manager] 2026-02-23 19:56:30.475053 | orchestrator | 2026-02-23 19:56:30.475069 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-23 19:56:32.256281 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-23 19:56:32.256336 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-23 19:56:32.256348 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-23 19:56:32.256359 | orchestrator | 2026-02-23 19:56:32.256371 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-23 19:56:32.334452 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:56:32.334532 | orchestrator | 2026-02-23 19:56:32.334548 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-23 19:56:32.415805 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:56:32.415855 | orchestrator | 2026-02-23 19:56:32.415863 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-23 19:56:32.973157 | orchestrator | changed: [testbed-manager] 2026-02-23 19:56:32.973244 | orchestrator | 2026-02-23 19:56:32.973265 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-23 19:56:33.047011 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:56:33.047054 | orchestrator | 2026-02-23 19:56:33.047062 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-23 19:56:33.873528 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-23 19:56:33.873755 | orchestrator | changed: [testbed-manager] 2026-02-23 19:56:33.873787 | orchestrator | 2026-02-23 19:56:33.873803 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-23 19:56:33.911450 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:56:33.911513 | orchestrator | 2026-02-23 19:56:33.911522 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-23 19:56:33.939971 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:56:33.940028 | orchestrator | 2026-02-23 19:56:33.940035 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-23 19:56:33.973446 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:56:33.973532 | orchestrator | 2026-02-23 19:56:33.973550 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-23 19:56:34.047356 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:56:34.047429 | orchestrator | 2026-02-23 19:56:34.047442 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-23 19:56:34.756481 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:34.756526 | orchestrator | 2026-02-23 19:56:34.756533 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-23 19:56:34.756538 | orchestrator | 2026-02-23 19:56:34.756542 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-23 19:56:36.145623 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:36.145738 | orchestrator | 2026-02-23 19:56:36.145757 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-23 19:56:37.086481 | orchestrator | changed: [testbed-manager] 2026-02-23 19:56:37.086565 | orchestrator | 2026-02-23 19:56:37.086582 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 19:56:37.086595 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-23 19:56:37.086607 | orchestrator | 2026-02-23 19:56:37.403025 | orchestrator | ok: Runtime: 0:06:40.699232 2026-02-23 19:56:37.420914 | 2026-02-23 19:56:37.421062 | TASK [Point out that logging in on the manager is now possible] 2026-02-23 19:56:37.454565 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-02-23 19:56:37.463035 | 2026-02-23 19:56:37.463147 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-23 19:56:37.508807 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 2026-02-23 19:56:37.519013 | 2026-02-23 19:56:37.519138 | TASK [Run manager part 1 + 2] 2026-02-23 19:56:38.692027 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-23 19:56:38.831500 | orchestrator | 2026-02-23 19:56:38.831582 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-23 19:56:38.831598 | orchestrator | 2026-02-23 19:56:38.831624 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-23 19:56:41.800876 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:41.800932 | orchestrator | 2026-02-23 19:56:41.800954 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-23 19:56:41.840833 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:56:41.840879 | orchestrator | 2026-02-23 19:56:41.840887 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-23 19:56:41.897418 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:41.897463 | orchestrator | 2026-02-23 19:56:41.897470 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-02-23 19:56:41.938508 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:41.938549 | orchestrator | 2026-02-23 19:56:41.938557 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-23 19:56:42.000351 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:42.000396 | orchestrator | 2026-02-23 19:56:42.000404 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-23 19:56:42.057342 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:42.057388 | orchestrator | 2026-02-23 19:56:42.057396 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-23 19:56:42.113837 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-23 19:56:42.113882 | orchestrator | 2026-02-23 19:56:42.113888 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-23 19:56:42.800849 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:42.800905 | orchestrator | 2026-02-23 19:56:42.800915 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-23 19:56:42.852975 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:56:42.853022 | orchestrator | 2026-02-23 19:56:42.853204 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-23 19:56:44.213776 | orchestrator | changed: [testbed-manager] 2026-02-23 19:56:44.213837 | orchestrator | 2026-02-23 19:56:44.213847 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-23 19:56:44.807506 | orchestrator | ok: [testbed-manager] 2026-02-23 19:56:44.807591 | orchestrator | 2026-02-23 19:56:44.807607 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-02-23 19:56:45.945262 | orchestrator | changed: [testbed-manager] 2026-02-23 19:56:45.945333 | orchestrator | 2026-02-23 19:56:45.945343 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-23 19:57:00.897083 | orchestrator | changed: [testbed-manager] 2026-02-23 19:57:00.897163 | orchestrator | 2026-02-23 19:57:00.897178 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-23 19:57:01.571544 | orchestrator | ok: [testbed-manager] 2026-02-23 19:57:01.571818 | orchestrator | 2026-02-23 19:57:01.571853 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-02-23 19:57:01.627691 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:57:01.627765 | orchestrator | 2026-02-23 19:57:01.627780 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-23 19:57:02.590406 | orchestrator | changed: [testbed-manager] 2026-02-23 19:57:02.590516 | orchestrator | 2026-02-23 19:57:02.590544 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-23 19:57:03.547239 | orchestrator | changed: [testbed-manager] 2026-02-23 19:57:03.547337 | orchestrator | 2026-02-23 19:57:03.547354 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-23 19:57:04.135253 | orchestrator | changed: [testbed-manager] 2026-02-23 19:57:04.135361 | orchestrator | 2026-02-23 19:57:04.135380 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-23 19:57:04.177060 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-23 19:57:04.177197 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-02-23 19:57:04.177225 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-23 19:57:04.177246 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-02-23 19:57:06.589789 | orchestrator | changed: [testbed-manager] 2026-02-23 19:57:06.590098 | orchestrator | 2026-02-23 19:57:06.590135 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-23 19:57:15.290975 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-23 19:57:15.291034 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-23 19:57:15.291042 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-23 19:57:15.291049 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-23 19:57:15.291059 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-23 19:57:15.291065 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-23 19:57:15.291070 | orchestrator | 2026-02-23 19:57:15.291077 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-23 19:57:16.345841 | orchestrator | changed: [testbed-manager] 2026-02-23 19:57:16.345930 | orchestrator | 2026-02-23 19:57:16.345944 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-23 19:57:16.392854 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:57:16.392958 | orchestrator | 2026-02-23 19:57:16.392977 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-23 19:57:19.419170 | orchestrator | changed: [testbed-manager] 2026-02-23 19:57:19.419258 | orchestrator | 2026-02-23 19:57:19.419274 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-23 19:57:19.466500 | orchestrator | skipping: [testbed-manager] 2026-02-23 19:57:19.466540 | 
orchestrator | 2026-02-23 19:57:19.466548 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-23 19:58:52.215218 | orchestrator | changed: [testbed-manager] 2026-02-23 19:58:52.215310 | orchestrator | 2026-02-23 19:58:52.215329 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-23 19:58:53.351285 | orchestrator | ok: [testbed-manager] 2026-02-23 19:58:53.351344 | orchestrator | 2026-02-23 19:58:53.351358 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 19:58:53.351369 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-23 19:58:53.351379 | orchestrator | 2026-02-23 19:58:53.654906 | orchestrator | ok: Runtime: 0:02:15.608718 2026-02-23 19:58:53.672879 | 2026-02-23 19:58:53.673010 | TASK [Reboot manager] 2026-02-23 19:58:55.208231 | orchestrator | ok: Runtime: 0:00:00.972155 2026-02-23 19:58:55.226972 | 2026-02-23 19:58:55.227151 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-23 19:59:09.759614 | orchestrator | ok 2026-02-23 19:59:09.770912 | 2026-02-23 19:59:09.771041 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-23 20:00:09.815974 | orchestrator | ok 2026-02-23 20:00:09.826644 | 2026-02-23 20:00:09.826787 | TASK [Deploy manager + bootstrap nodes] 2026-02-23 20:00:12.245912 | orchestrator | 2026-02-23 20:00:12.246054 | orchestrator | # DEPLOY MANAGER 2026-02-23 20:00:12.246066 | orchestrator | 2026-02-23 20:00:12.246190 | orchestrator | + set -e 2026-02-23 20:00:12.246203 | orchestrator | + echo 2026-02-23 20:00:12.246212 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-23 20:00:12.246223 | orchestrator | + echo 2026-02-23 20:00:12.246249 | orchestrator | + cat /opt/manager-vars.sh 2026-02-23 20:00:12.249321 | orchestrator | export NUMBER_OF_NODES=6 2026-02-23 
20:00:12.249350 | orchestrator | 2026-02-23 20:00:12.249355 | orchestrator | export CEPH_VERSION=reef 2026-02-23 20:00:12.249361 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-23 20:00:12.249367 | orchestrator | export MANAGER_VERSION=latest 2026-02-23 20:00:12.249379 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-23 20:00:12.249383 | orchestrator | 2026-02-23 20:00:12.249392 | orchestrator | export ARA=false 2026-02-23 20:00:12.249397 | orchestrator | export DEPLOY_MODE=manager 2026-02-23 20:00:12.249405 | orchestrator | export TEMPEST=false 2026-02-23 20:00:12.249409 | orchestrator | export IS_ZUUL=true 2026-02-23 20:00:12.249413 | orchestrator | 2026-02-23 20:00:12.249421 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96 2026-02-23 20:00:12.249442 | orchestrator | export EXTERNAL_API=false 2026-02-23 20:00:12.249446 | orchestrator | 2026-02-23 20:00:12.249450 | orchestrator | export IMAGE_USER=ubuntu 2026-02-23 20:00:12.249458 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-23 20:00:12.249462 | orchestrator | 2026-02-23 20:00:12.249466 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-23 20:00:12.249566 | orchestrator | 2026-02-23 20:00:12.249581 | orchestrator | + echo 2026-02-23 20:00:12.249587 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-23 20:00:12.250583 | orchestrator | ++ export INTERACTIVE=false 2026-02-23 20:00:12.250606 | orchestrator | ++ INTERACTIVE=false 2026-02-23 20:00:12.250612 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-23 20:00:12.250618 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-23 20:00:12.250671 | orchestrator | + source /opt/manager-vars.sh 2026-02-23 20:00:12.250713 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-23 20:00:12.250720 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-23 20:00:12.250725 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-23 20:00:12.250732 | orchestrator | ++ CEPH_VERSION=reef 2026-02-23 20:00:12.250990 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-02-23 20:00:12.251012 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-23 20:00:12.251019 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-23 20:00:12.251026 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-23 20:00:12.251034 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-23 20:00:12.251045 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-23 20:00:12.251050 | orchestrator | ++ export ARA=false 2026-02-23 20:00:12.251105 | orchestrator | ++ ARA=false 2026-02-23 20:00:12.251112 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-23 20:00:12.251116 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-23 20:00:12.251120 | orchestrator | ++ export TEMPEST=false 2026-02-23 20:00:12.251124 | orchestrator | ++ TEMPEST=false 2026-02-23 20:00:12.251129 | orchestrator | ++ export IS_ZUUL=true 2026-02-23 20:00:12.251133 | orchestrator | ++ IS_ZUUL=true 2026-02-23 20:00:12.251137 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96 2026-02-23 20:00:12.251141 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96 2026-02-23 20:00:12.251145 | orchestrator | ++ export EXTERNAL_API=false 2026-02-23 20:00:12.251149 | orchestrator | ++ EXTERNAL_API=false 2026-02-23 20:00:12.251154 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-23 20:00:12.251158 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-23 20:00:12.251162 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-23 20:00:12.251166 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-23 20:00:12.251170 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-23 20:00:12.251174 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-23 20:00:12.251178 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-23 20:00:12.303917 | orchestrator | + docker version 2026-02-23 20:00:12.400967 | orchestrator | Client: Docker Engine - Community 2026-02-23 20:00:12.401060 | orchestrator | Version: 27.5.1 
2026-02-23 20:00:12.401078 | orchestrator | API version: 1.47 2026-02-23 20:00:12.401090 | orchestrator | Go version: go1.22.11 2026-02-23 20:00:12.401101 | orchestrator | Git commit: 9f9e405 2026-02-23 20:00:12.401111 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-23 20:00:12.401124 | orchestrator | OS/Arch: linux/amd64 2026-02-23 20:00:12.401135 | orchestrator | Context: default 2026-02-23 20:00:12.401146 | orchestrator | 2026-02-23 20:00:12.401157 | orchestrator | Server: Docker Engine - Community 2026-02-23 20:00:12.401167 | orchestrator | Engine: 2026-02-23 20:00:12.401179 | orchestrator | Version: 27.5.1 2026-02-23 20:00:12.401190 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-23 20:00:12.401227 | orchestrator | Go version: go1.22.11 2026-02-23 20:00:12.401238 | orchestrator | Git commit: 4c9b3b0 2026-02-23 20:00:12.401249 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-23 20:00:12.401260 | orchestrator | OS/Arch: linux/amd64 2026-02-23 20:00:12.401271 | orchestrator | Experimental: false 2026-02-23 20:00:12.401282 | orchestrator | containerd: 2026-02-23 20:00:12.401294 | orchestrator | Version: v2.2.1 2026-02-23 20:00:12.401305 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-23 20:00:12.401316 | orchestrator | runc: 2026-02-23 20:00:12.401328 | orchestrator | Version: 1.3.4 2026-02-23 20:00:12.401347 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-23 20:00:12.401367 | orchestrator | docker-init: 2026-02-23 20:00:12.401386 | orchestrator | Version: 0.19.0 2026-02-23 20:00:12.401401 | orchestrator | GitCommit: de40ad0 2026-02-23 20:00:12.403794 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-23 20:00:12.414984 | orchestrator | + set -e 2026-02-23 20:00:12.415029 | orchestrator | + source /opt/manager-vars.sh 2026-02-23 20:00:12.415042 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-23 20:00:12.415053 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-23 
20:00:12.415064 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-23 20:00:12.415074 | orchestrator | ++ CEPH_VERSION=reef 2026-02-23 20:00:12.415085 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-23 20:00:12.415097 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-23 20:00:12.415108 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-23 20:00:12.415118 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-23 20:00:12.415145 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-23 20:00:12.415156 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-23 20:00:12.415167 | orchestrator | ++ export ARA=false 2026-02-23 20:00:12.415178 | orchestrator | ++ ARA=false 2026-02-23 20:00:12.415189 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-23 20:00:12.415209 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-23 20:00:12.415322 | orchestrator | ++ export TEMPEST=false 2026-02-23 20:00:12.415338 | orchestrator | ++ TEMPEST=false 2026-02-23 20:00:12.415349 | orchestrator | ++ export IS_ZUUL=true 2026-02-23 20:00:12.415360 | orchestrator | ++ IS_ZUUL=true 2026-02-23 20:00:12.415371 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96 2026-02-23 20:00:12.415382 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96 2026-02-23 20:00:12.415393 | orchestrator | ++ export EXTERNAL_API=false 2026-02-23 20:00:12.415478 | orchestrator | ++ EXTERNAL_API=false 2026-02-23 20:00:12.415495 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-23 20:00:12.415506 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-23 20:00:12.415528 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-23 20:00:12.415539 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-23 20:00:12.415550 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-23 20:00:12.415561 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-23 20:00:12.415572 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-23 20:00:12.415583 | orchestrator | ++ export 
INTERACTIVE=false 2026-02-23 20:00:12.415593 | orchestrator | ++ INTERACTIVE=false 2026-02-23 20:00:12.415604 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-23 20:00:12.415620 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-23 20:00:12.415713 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-23 20:00:12.415728 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-23 20:00:12.415739 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-02-23 20:00:12.422703 | orchestrator | + set -e 2026-02-23 20:00:12.422748 | orchestrator | + VERSION=reef 2026-02-23 20:00:12.423808 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-23 20:00:12.429714 | orchestrator | + [[ -n ceph_version: reef ]] 2026-02-23 20:00:12.429758 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-02-23 20:00:12.434808 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-02-23 20:00:12.440703 | orchestrator | + set -e 2026-02-23 20:00:12.441110 | orchestrator | + VERSION=2024.2 2026-02-23 20:00:12.441737 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-23 20:00:12.445617 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-02-23 20:00:12.445661 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-02-23 20:00:12.450398 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-23 20:00:12.451629 | orchestrator | ++ semver latest 7.0.0 2026-02-23 20:00:12.510939 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-23 20:00:12.511027 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-23 20:00:12.511042 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-23 20:00:12.511295 | orchestrator | ++ semver latest 10.0.0-0 2026-02-23 20:00:12.570375 | 
orchestrator | + [[ -1 -ge 0 ]] 2026-02-23 20:00:12.571159 | orchestrator | ++ semver 2024.2 2025.1 2026-02-23 20:00:12.628745 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-23 20:00:12.628828 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-23 20:00:12.715614 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-23 20:00:12.717913 | orchestrator | + source /opt/venv/bin/activate 2026-02-23 20:00:12.719297 | orchestrator | ++ deactivate nondestructive 2026-02-23 20:00:12.719420 | orchestrator | ++ '[' -n '' ']' 2026-02-23 20:00:12.719513 | orchestrator | ++ '[' -n '' ']' 2026-02-23 20:00:12.719548 | orchestrator | ++ hash -r 2026-02-23 20:00:12.719812 | orchestrator | ++ '[' -n '' ']' 2026-02-23 20:00:12.719832 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-23 20:00:12.719844 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-23 20:00:12.719859 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-02-23 20:00:12.719872 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-23 20:00:12.719884 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-23 20:00:12.719895 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-23 20:00:12.719907 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-23 20:00:12.719919 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-23 20:00:12.719931 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-23 20:00:12.719942 | orchestrator | ++ export PATH 2026-02-23 20:00:12.719953 | orchestrator | ++ '[' -n '' ']' 2026-02-23 20:00:12.719965 | orchestrator | ++ '[' -z '' ']' 2026-02-23 20:00:12.719976 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-23 20:00:12.719987 | orchestrator | ++ PS1='(venv) ' 2026-02-23 20:00:12.719998 | orchestrator | ++ export PS1 2026-02-23 20:00:12.720009 | orchestrator | ++ 
VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-23 20:00:12.720021 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-23 20:00:12.720033 | orchestrator | ++ hash -r 2026-02-23 20:00:12.720144 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-23 20:00:13.834874 | orchestrator | 2026-02-23 20:00:13.834961 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-23 20:00:13.834976 | orchestrator | 2026-02-23 20:00:13.834986 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-23 20:00:14.399690 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:14.399755 | orchestrator | 2026-02-23 20:00:14.399763 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-02-23 20:00:15.364527 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:15.364623 | orchestrator | 2026-02-23 20:00:15.364640 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-23 20:00:15.364652 | orchestrator | 2026-02-23 20:00:15.364664 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-23 20:00:17.595959 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:17.596075 | orchestrator | 2026-02-23 20:00:17.596113 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-23 20:00:17.650402 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:17.650566 | orchestrator | 2026-02-23 20:00:17.650590 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-23 20:00:18.095405 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:18.095589 | orchestrator | 2026-02-23 20:00:18.095619 | orchestrator | TASK [Add netbox_enable parameter] 
********************************************* 2026-02-23 20:00:18.129820 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:00:18.129911 | orchestrator | 2026-02-23 20:00:18.129926 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-23 20:00:18.451610 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:18.451696 | orchestrator | 2026-02-23 20:00:18.451712 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-23 20:00:18.773888 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:18.773981 | orchestrator | 2026-02-23 20:00:18.773998 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-23 20:00:18.885105 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:00:18.885197 | orchestrator | 2026-02-23 20:00:18.885212 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-02-23 20:00:18.885225 | orchestrator | 2026-02-23 20:00:18.885236 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-23 20:00:20.585762 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:20.585823 | orchestrator | 2026-02-23 20:00:20.585835 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-23 20:00:20.689467 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-23 20:00:20.689565 | orchestrator | 2026-02-23 20:00:20.689587 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-23 20:00:20.755834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-23 20:00:20.755924 | orchestrator | 2026-02-23 20:00:20.755939 | orchestrator | TASK [osism.services.traefik : Create required directories] 
******************** 2026-02-23 20:00:21.820122 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-23 20:00:21.820213 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-23 20:00:21.820227 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-23 20:00:21.820239 | orchestrator | 2026-02-23 20:00:21.820252 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-23 20:00:23.563996 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-23 20:00:23.564069 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-23 20:00:23.564075 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-23 20:00:23.564080 | orchestrator | 2026-02-23 20:00:23.564085 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-02-23 20:00:24.181691 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-23 20:00:24.181791 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:24.181807 | orchestrator | 2026-02-23 20:00:24.181820 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-23 20:00:24.790347 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-23 20:00:24.790505 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:24.790552 | orchestrator | 2026-02-23 20:00:24.790567 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-23 20:00:24.856749 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:00:24.856841 | orchestrator | 2026-02-23 20:00:24.856858 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-23 20:00:25.213395 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:25.213521 | orchestrator | 2026-02-23 20:00:25.213542 | orchestrator | 
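
The "Create traefik external network" task that follows is idempotent: the network is created only when it does not already exist, which is why the same task later reports `ok` instead of `changed` in the manager role. A hedged shell equivalent (the function name is illustrative; in the playbook this is handled by an Ansible module, not raw CLI calls):

```shell
# Illustrative equivalent of the idempotent network-creation step;
# not the actual module code from osism.services.traefik.
ensure_network() {
    local net="$1"
    # "docker network inspect" exits non-zero when the network is missing.
    if ! docker network inspect "$net" >/dev/null 2>&1; then
        docker network create "$net"
    fi
}
```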
TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-23 20:00:25.289959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-23 20:00:25.290094 | orchestrator | 2026-02-23 20:00:25.290111 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-23 20:00:26.326380 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:26.326502 | orchestrator | 2026-02-23 20:00:26.326518 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-23 20:00:27.080843 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:27.080939 | orchestrator | 2026-02-23 20:00:27.080961 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-23 20:00:41.298630 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:41.298733 | orchestrator | 2026-02-23 20:00:41.298771 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-23 20:00:41.350702 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:00:41.350823 | orchestrator | 2026-02-23 20:00:41.350841 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-23 20:00:41.350853 | orchestrator | 2026-02-23 20:00:41.350865 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-23 20:00:43.092534 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:43.092649 | orchestrator | 2026-02-23 20:00:43.092700 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-23 20:00:43.210166 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-23 20:00:43.210268 | orchestrator | 2026-02-23 20:00:43.210285 | orchestrator | TASK 
[osism.services.manager : Include install tasks] ************************** 2026-02-23 20:00:43.268024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-23 20:00:43.268120 | orchestrator | 2026-02-23 20:00:43.268136 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-23 20:00:45.593906 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:45.594011 | orchestrator | 2026-02-23 20:00:45.594086 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-23 20:00:45.647239 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:45.647324 | orchestrator | 2026-02-23 20:00:45.647339 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-23 20:00:45.778379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-23 20:00:45.778515 | orchestrator | 2026-02-23 20:00:45.778534 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-23 20:00:48.551060 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-23 20:00:48.551179 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-23 20:00:48.551196 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-23 20:00:48.551208 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-23 20:00:48.551219 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-23 20:00:48.551230 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-23 20:00:48.551242 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-23 20:00:48.551269 | orchestrator | changed: [testbed-manager] 
=> (item=/opt/state) 2026-02-23 20:00:48.552178 | orchestrator | 2026-02-23 20:00:48.552264 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-23 20:00:49.147535 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:49.147631 | orchestrator | 2026-02-23 20:00:49.147647 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-23 20:00:49.756005 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:49.756104 | orchestrator | 2026-02-23 20:00:49.756121 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-23 20:00:49.830516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-23 20:00:49.830608 | orchestrator | 2026-02-23 20:00:49.830622 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-23 20:00:50.980173 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-23 20:00:50.980238 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-23 20:00:50.980248 | orchestrator | 2026-02-23 20:00:50.980256 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-23 20:00:51.594550 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:51.594642 | orchestrator | 2026-02-23 20:00:51.594661 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-23 20:00:51.649074 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:00:51.649166 | orchestrator | 2026-02-23 20:00:51.649183 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-23 20:00:51.723190 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-23 20:00:51.723278 | orchestrator | 2026-02-23 20:00:51.723294 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-23 20:00:52.352771 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:52.352890 | orchestrator | 2026-02-23 20:00:52.352908 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-23 20:00:52.399643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-23 20:00:52.399757 | orchestrator | 2026-02-23 20:00:52.399772 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-23 20:00:53.714735 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-23 20:00:53.714835 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-23 20:00:53.714867 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:53.714881 | orchestrator | 2026-02-23 20:00:53.714904 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-23 20:00:54.316563 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:54.316664 | orchestrator | 2026-02-23 20:00:54.316681 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-23 20:00:54.371569 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:00:54.371717 | orchestrator | 2026-02-23 20:00:54.371735 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-23 20:00:54.452904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-23 20:00:54.452991 | orchestrator | 
2026-02-23 20:00:54.453005 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-23 20:00:54.981104 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:54.981204 | orchestrator | 2026-02-23 20:00:54.981219 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-23 20:00:55.363185 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:55.363277 | orchestrator | 2026-02-23 20:00:55.363294 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-23 20:00:56.529744 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-23 20:00:56.529838 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-23 20:00:56.529853 | orchestrator | 2026-02-23 20:00:56.529866 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-23 20:00:57.155334 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:57.155482 | orchestrator | 2026-02-23 20:00:57.155501 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-23 20:00:57.518649 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:57.518741 | orchestrator | 2026-02-23 20:00:57.518760 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-23 20:00:57.862289 | orchestrator | changed: [testbed-manager] 2026-02-23 20:00:57.862367 | orchestrator | 2026-02-23 20:00:57.862381 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-23 20:00:57.911557 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:00:57.911641 | orchestrator | 2026-02-23 20:00:57.911656 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-23 20:00:57.980310 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-23 20:00:57.980466 | orchestrator | 2026-02-23 20:00:57.980496 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-23 20:00:58.021899 | orchestrator | ok: [testbed-manager] 2026-02-23 20:00:58.021988 | orchestrator | 2026-02-23 20:00:58.022003 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-23 20:00:59.933540 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-23 20:00:59.933645 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-23 20:00:59.933661 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-23 20:00:59.933673 | orchestrator | 2026-02-23 20:00:59.933685 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-23 20:01:00.609582 | orchestrator | changed: [testbed-manager] 2026-02-23 20:01:00.609672 | orchestrator | 2026-02-23 20:01:00.609687 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-23 20:01:01.305495 | orchestrator | changed: [testbed-manager] 2026-02-23 20:01:01.305578 | orchestrator | 2026-02-23 20:01:01.305593 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-23 20:01:02.016237 | orchestrator | changed: [testbed-manager] 2026-02-23 20:01:02.016331 | orchestrator | 2026-02-23 20:01:02.016351 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-23 20:01:02.090991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-23 20:01:02.091074 | orchestrator | 2026-02-23 20:01:02.091090 | orchestrator | TASK 
[osism.services.manager : Include scripts vars file] ********************** 2026-02-23 20:01:02.140859 | orchestrator | ok: [testbed-manager] 2026-02-23 20:01:02.140944 | orchestrator | 2026-02-23 20:01:02.140959 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-23 20:01:02.828323 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-23 20:01:02.828472 | orchestrator | 2026-02-23 20:01:02.828498 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-23 20:01:02.909246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-23 20:01:02.909344 | orchestrator | 2026-02-23 20:01:02.909366 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-23 20:01:03.603949 | orchestrator | changed: [testbed-manager] 2026-02-23 20:01:03.604041 | orchestrator | 2026-02-23 20:01:03.604056 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-23 20:01:04.184976 | orchestrator | ok: [testbed-manager] 2026-02-23 20:01:04.185059 | orchestrator | 2026-02-23 20:01:04.185068 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-23 20:01:04.241551 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:01:04.241648 | orchestrator | 2026-02-23 20:01:04.241674 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-23 20:01:04.294643 | orchestrator | ok: [testbed-manager] 2026-02-23 20:01:04.294694 | orchestrator | 2026-02-23 20:01:04.294701 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-23 20:01:05.101522 | orchestrator | changed: [testbed-manager] 2026-02-23 20:01:05.101620 | orchestrator | 2026-02-23 
20:01:05.101638 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-23 20:02:12.871674 | orchestrator | changed: [testbed-manager] 2026-02-23 20:02:12.871773 | orchestrator | 2026-02-23 20:02:12.871790 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-23 20:02:13.870868 | orchestrator | ok: [testbed-manager] 2026-02-23 20:02:13.870964 | orchestrator | 2026-02-23 20:02:13.870981 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-23 20:02:13.931833 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:02:13.931917 | orchestrator | 2026-02-23 20:02:13.931933 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-23 20:02:18.351808 | orchestrator | changed: [testbed-manager] 2026-02-23 20:02:18.351902 | orchestrator | 2026-02-23 20:02:18.351919 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-23 20:02:18.450850 | orchestrator | ok: [testbed-manager] 2026-02-23 20:02:18.450944 | orchestrator | 2026-02-23 20:02:18.450984 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-23 20:02:18.450998 | orchestrator | 2026-02-23 20:02:18.451010 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-23 20:02:18.508573 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:02:18.508665 | orchestrator | 2026-02-23 20:02:18.508682 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-23 20:03:18.559146 | orchestrator | Pausing for 60 seconds 2026-02-23 20:03:18.559242 | orchestrator | changed: [testbed-manager] 2026-02-23 20:03:18.559258 | orchestrator | 2026-02-23 20:03:18.559270 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure 
that all containers are up] *** 2026-02-23 20:03:21.806841 | orchestrator | changed: [testbed-manager] 2026-02-23 20:03:21.806915 | orchestrator | 2026-02-23 20:03:21.806922 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-23 20:04:03.293702 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-23 20:04:03.293850 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-23 20:04:03.293879 | orchestrator | changed: [testbed-manager] 2026-02-23 20:04:03.293962 | orchestrator | 2026-02-23 20:04:03.293986 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-23 20:04:13.047985 | orchestrator | changed: [testbed-manager] 2026-02-23 20:04:13.048068 | orchestrator | 2026-02-23 20:04:13.048082 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-23 20:04:13.130699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-23 20:04:13.130790 | orchestrator | 2026-02-23 20:04:13.130806 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-23 20:04:13.130831 | orchestrator | 2026-02-23 20:04:13.130843 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-23 20:04:13.161029 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:04:13.161122 | orchestrator | 2026-02-23 20:04:13.161141 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-23 20:04:13.213216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-23 20:04:13.213308 | 
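
The version check script deployed here compares, per service, the image tag expected from the configuration with the image the running container actually uses, then prints the MATCH/summary output shown below. A hypothetical sketch of one such comparison (function name and output format are simplified; the real script also tracks enabled flags):

```shell
# Simplified sketch of the per-service comparison; mismatches count as
# errors, missing-but-expected containers as warnings.
check_service() {
    local name="$1" expected="$2"
    local running
    if ! running="$(docker inspect -f '{{.Config.Image}}' "$name" 2>/dev/null)"; then
        echo "WARNING: expected container $name is not running"
        return 0
    fi
    if [[ "$running" == "$expected" ]]; then
        echo "$name: MATCH ($running)"
    else
        echo "$name: MISMATCH expected=$expected running=$running"
        return 1
    fi
}
```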
orchestrator | 2026-02-23 20:04:13.213322 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-23 20:04:13.847293 | orchestrator | changed: [testbed-manager] 2026-02-23 20:04:13.847443 | orchestrator | 2026-02-23 20:04:13.847461 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-23 20:04:16.726296 | orchestrator | ok: [testbed-manager] 2026-02-23 20:04:16.726456 | orchestrator | 2026-02-23 20:04:16.726476 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-23 20:04:16.793738 | orchestrator | ok: [testbed-manager] => { 2026-02-23 20:04:16.793827 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-23 20:04:16.793843 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-23 20:04:16.793854 | orchestrator | "Checking running containers against expected versions...", 2026-02-23 20:04:16.793866 | orchestrator | "", 2026-02-23 20:04:16.793880 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-23 20:04:16.793891 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-23 20:04:16.793902 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.793914 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-23 20:04:16.793925 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.793936 | orchestrator | "", 2026-02-23 20:04:16.793947 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-23 20:04:16.793958 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-02-23 20:04:16.793969 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.793979 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-02-23 20:04:16.793990 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794001 | orchestrator 
| "", 2026-02-23 20:04:16.794012 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-23 20:04:16.794078 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-23 20:04:16.794089 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794100 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-23 20:04:16.794111 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794122 | orchestrator | "", 2026-02-23 20:04:16.794133 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-23 20:04:16.794143 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-23 20:04:16.794155 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794165 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-23 20:04:16.794176 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794187 | orchestrator | "", 2026-02-23 20:04:16.794198 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-23 20:04:16.794208 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-02-23 20:04:16.794244 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794255 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-02-23 20:04:16.794265 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794277 | orchestrator | "", 2026-02-23 20:04:16.794290 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-23 20:04:16.794303 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.794315 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794328 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.794365 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794379 | orchestrator | "", 2026-02-23 20:04:16.794391 | orchestrator | "Checking service: 
ara-server (ARA Server)", 2026-02-23 20:04:16.794403 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-23 20:04:16.794415 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794428 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-23 20:04:16.794440 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794453 | orchestrator | "", 2026-02-23 20:04:16.794465 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-23 20:04:16.794477 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-23 20:04:16.794489 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794501 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-23 20:04:16.794514 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794526 | orchestrator | "", 2026-02-23 20:04:16.794546 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-23 20:04:16.794559 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-02-23 20:04:16.794576 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794590 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-02-23 20:04:16.794602 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794614 | orchestrator | "", 2026-02-23 20:04:16.794627 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-23 20:04:16.794638 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-23 20:04:16.794649 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794667 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-23 20:04:16.794685 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794696 | orchestrator | "", 2026-02-23 20:04:16.794707 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-23 20:04:16.794717 | orchestrator | " Expected: 
registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.794728 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794739 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.794749 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794760 | orchestrator | "", 2026-02-23 20:04:16.794770 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-23 20:04:16.794781 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.794792 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794802 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.794813 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794823 | orchestrator | "", 2026-02-23 20:04:16.794834 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-23 20:04:16.794844 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.794855 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794866 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.794876 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794887 | orchestrator | "", 2026-02-23 20:04:16.794897 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-23 20:04:16.794908 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.794918 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.794929 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.794948 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.794959 | orchestrator | "", 2026-02-23 20:04:16.794969 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-23 20:04:16.794998 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.795010 | orchestrator | " Enabled: true", 2026-02-23 20:04:16.795021 | 
orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-23 20:04:16.795031 | orchestrator | " Status: ✅ MATCH", 2026-02-23 20:04:16.795042 | orchestrator | "", 2026-02-23 20:04:16.795053 | orchestrator | "=== Summary ===", 2026-02-23 20:04:16.795063 | orchestrator | "Errors (version mismatches): 0", 2026-02-23 20:04:16.795074 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-23 20:04:16.795085 | orchestrator | "", 2026-02-23 20:04:16.795095 | orchestrator | "✅ All running containers match expected versions!" 2026-02-23 20:04:16.795106 | orchestrator | ] 2026-02-23 20:04:16.795117 | orchestrator | } 2026-02-23 20:04:16.795128 | orchestrator | 2026-02-23 20:04:16.795139 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-23 20:04:16.845498 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:04:16.845587 | orchestrator | 2026-02-23 20:04:16.845601 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:04:16.845614 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-23 20:04:16.845626 | orchestrator | 2026-02-23 20:04:16.908308 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-23 20:04:16.908424 | orchestrator | + deactivate 2026-02-23 20:04:16.908439 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-23 20:04:16.908455 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-23 20:04:16.908466 | orchestrator | + export PATH 2026-02-23 20:04:16.908477 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-23 20:04:16.908503 | orchestrator | + '[' -n '' ']' 2026-02-23 20:04:16.908524 | orchestrator | + hash -r 2026-02-23 20:04:16.908617 | orchestrator | + '[' -n '' ']' 2026-02-23 
20:04:16.908632 | orchestrator | + unset VIRTUAL_ENV 2026-02-23 20:04:16.908642 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-23 20:04:16.908654 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-23 20:04:16.908664 | orchestrator | + unset -f deactivate 2026-02-23 20:04:16.908676 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-23 20:04:16.914401 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-23 20:04:16.914448 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-23 20:04:16.914460 | orchestrator | + local max_attempts=60 2026-02-23 20:04:16.914471 | orchestrator | + local name=ceph-ansible 2026-02-23 20:04:16.914482 | orchestrator | + local attempt_num=1 2026-02-23 20:04:16.915447 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:04:16.953815 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:04:16.953914 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-23 20:04:16.953930 | orchestrator | + local max_attempts=60 2026-02-23 20:04:16.953942 | orchestrator | + local name=kolla-ansible 2026-02-23 20:04:16.953954 | orchestrator | + local attempt_num=1 2026-02-23 20:04:16.954222 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-23 20:04:16.986486 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:04:16.986570 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-23 20:04:16.986583 | orchestrator | + local max_attempts=60 2026-02-23 20:04:16.986595 | orchestrator | + local name=osism-ansible 2026-02-23 20:04:16.986606 | orchestrator | + local attempt_num=1 2026-02-23 20:04:16.986702 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-23 20:04:17.011155 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:04:17.011233 | orchestrator | + [[ true == 
\t\r\u\e ]] 2026-02-23 20:04:17.011247 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-23 20:04:17.627503 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-23 20:04:17.796957 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-23 20:04:17.797089 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-02-23 20:04:17.797108 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-02-23 20:04:17.797398 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-02-23 20:04:17.797424 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-02-23 20:04:17.797435 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-02-23 20:04:17.797446 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-02-23 20:04:17.797457 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-02-23 20:04:17.797481 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-02-23 20:04:17.797698 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute 
(healthy) 3306/tcp 2026-02-23 20:04:17.797725 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2026-02-23 20:04:17.797744 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-02-23 20:04:17.797756 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-02-23 20:04:17.797766 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-02-23 20:04:17.797777 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-02-23 20:04:17.797788 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-02-23 20:04:17.804090 | orchestrator | ++ semver latest 7.0.0 2026-02-23 20:04:17.858541 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-23 20:04:17.858634 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-23 20:04:17.858651 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-23 20:04:17.862800 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-23 20:04:30.047195 | orchestrator | 2026-02-23 20:04:30 | INFO  | Prepare task for execution of resolvconf. 2026-02-23 20:04:30.220621 | orchestrator | 2026-02-23 20:04:30 | INFO  | Task 73369b71-82c3-4397-8ee1-68bf506a7a7f (resolvconf) was prepared for execution. 
2026-02-23 20:04:30.220742 | orchestrator | 2026-02-23 20:04:30 | INFO  | It takes a moment until task 73369b71-82c3-4397-8ee1-68bf506a7a7f (resolvconf) has been started and output is visible here. 2026-02-23 20:04:43.351507 | orchestrator | 2026-02-23 20:04:43.351645 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-23 20:04:43.351673 | orchestrator | 2026-02-23 20:04:43.351693 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-23 20:04:43.351713 | orchestrator | Monday 23 February 2026 20:04:33 +0000 (0:00:00.101) 0:00:00.101 ******* 2026-02-23 20:04:43.351734 | orchestrator | ok: [testbed-manager] 2026-02-23 20:04:43.351753 | orchestrator | 2026-02-23 20:04:43.351772 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-23 20:04:43.351792 | orchestrator | Monday 23 February 2026 20:04:38 +0000 (0:00:04.424) 0:00:04.525 ******* 2026-02-23 20:04:43.351810 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:04:43.351830 | orchestrator | 2026-02-23 20:04:43.351849 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-23 20:04:43.351868 | orchestrator | Monday 23 February 2026 20:04:38 +0000 (0:00:00.066) 0:00:04.592 ******* 2026-02-23 20:04:43.351886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-23 20:04:43.351905 | orchestrator | 2026-02-23 20:04:43.351923 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-23 20:04:43.351944 | orchestrator | Monday 23 February 2026 20:04:38 +0000 (0:00:00.068) 0:00:04.660 ******* 2026-02-23 20:04:43.351978 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-23 20:04:43.351999 | orchestrator | 2026-02-23 20:04:43.352017 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-23 20:04:43.352036 | orchestrator | Monday 23 February 2026 20:04:38 +0000 (0:00:00.073) 0:00:04.734 ******* 2026-02-23 20:04:43.352054 | orchestrator | ok: [testbed-manager] 2026-02-23 20:04:43.352074 | orchestrator | 2026-02-23 20:04:43.352095 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-23 20:04:43.352114 | orchestrator | Monday 23 February 2026 20:04:39 +0000 (0:00:00.879) 0:00:05.613 ******* 2026-02-23 20:04:43.352133 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:04:43.352152 | orchestrator | 2026-02-23 20:04:43.352170 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-23 20:04:43.352189 | orchestrator | Monday 23 February 2026 20:04:39 +0000 (0:00:00.054) 0:00:05.668 ******* 2026-02-23 20:04:43.352209 | orchestrator | ok: [testbed-manager] 2026-02-23 20:04:43.352228 | orchestrator | 2026-02-23 20:04:43.352248 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-23 20:04:43.352267 | orchestrator | Monday 23 February 2026 20:04:39 +0000 (0:00:00.441) 0:00:06.110 ******* 2026-02-23 20:04:43.352286 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:04:43.352303 | orchestrator | 2026-02-23 20:04:43.352322 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-23 20:04:43.352388 | orchestrator | Monday 23 February 2026 20:04:39 +0000 (0:00:00.071) 0:00:06.181 ******* 2026-02-23 20:04:43.352410 | orchestrator | changed: [testbed-manager] 2026-02-23 20:04:43.352430 | orchestrator | 2026-02-23 
20:04:43.352448 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-23 20:04:43.352463 | orchestrator | Monday 23 February 2026 20:04:40 +0000 (0:00:00.471) 0:00:06.653 ******* 2026-02-23 20:04:43.352474 | orchestrator | changed: [testbed-manager] 2026-02-23 20:04:43.352485 | orchestrator | 2026-02-23 20:04:43.352496 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-23 20:04:43.352506 | orchestrator | Monday 23 February 2026 20:04:41 +0000 (0:00:00.969) 0:00:07.623 ******* 2026-02-23 20:04:43.352517 | orchestrator | ok: [testbed-manager] 2026-02-23 20:04:43.352528 | orchestrator | 2026-02-23 20:04:43.352561 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-23 20:04:43.352572 | orchestrator | Monday 23 February 2026 20:04:42 +0000 (0:00:00.834) 0:00:08.458 ******* 2026-02-23 20:04:43.352581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-23 20:04:43.352591 | orchestrator | 2026-02-23 20:04:43.352601 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-23 20:04:43.352610 | orchestrator | Monday 23 February 2026 20:04:42 +0000 (0:00:00.080) 0:00:08.539 ******* 2026-02-23 20:04:43.352619 | orchestrator | changed: [testbed-manager] 2026-02-23 20:04:43.352629 | orchestrator | 2026-02-23 20:04:43.352638 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:04:43.352650 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-23 20:04:43.352659 | orchestrator | 2026-02-23 20:04:43.352669 | orchestrator | 2026-02-23 20:04:43.352678 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-23 20:04:43.352688 | orchestrator | Monday 23 February 2026 20:04:43 +0000 (0:00:00.996) 0:00:09.535 ******* 2026-02-23 20:04:43.352697 | orchestrator | =============================================================================== 2026-02-23 20:04:43.352707 | orchestrator | Gathering Facts --------------------------------------------------------- 4.42s 2026-02-23 20:04:43.352716 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.00s 2026-02-23 20:04:43.352726 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.97s 2026-02-23 20:04:43.352735 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.88s 2026-02-23 20:04:43.352745 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.83s 2026-02-23 20:04:43.352754 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.47s 2026-02-23 20:04:43.352783 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.44s 2026-02-23 20:04:43.352793 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-02-23 20:04:43.352803 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-02-23 20:04:43.352812 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-02-23 20:04:43.352822 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-02-23 20:04:43.352831 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-23 20:04:43.352841 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-02-23 20:04:43.531423 | 
orchestrator | + osism apply sshconfig 2026-02-23 20:04:55.418962 | orchestrator | 2026-02-23 20:04:55 | INFO  | Prepare task for execution of sshconfig. 2026-02-23 20:04:55.482733 | orchestrator | 2026-02-23 20:04:55 | INFO  | Task b831e3d3-28b0-4683-ade3-f5375a128946 (sshconfig) was prepared for execution. 2026-02-23 20:04:55.482822 | orchestrator | 2026-02-23 20:04:55 | INFO  | It takes a moment until task b831e3d3-28b0-4683-ade3-f5375a128946 (sshconfig) has been started and output is visible here. 2026-02-23 20:05:05.738240 | orchestrator | 2026-02-23 20:05:05.738399 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-23 20:05:05.738418 | orchestrator | 2026-02-23 20:05:05.738430 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-23 20:05:05.738442 | orchestrator | Monday 23 February 2026 20:04:59 +0000 (0:00:00.130) 0:00:00.130 ******* 2026-02-23 20:05:05.738454 | orchestrator | ok: [testbed-manager] 2026-02-23 20:05:05.738466 | orchestrator | 2026-02-23 20:05:05.738477 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-23 20:05:05.738488 | orchestrator | Monday 23 February 2026 20:04:59 +0000 (0:00:00.510) 0:00:00.640 ******* 2026-02-23 20:05:05.738528 | orchestrator | changed: [testbed-manager] 2026-02-23 20:05:05.738540 | orchestrator | 2026-02-23 20:05:05.738551 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-23 20:05:05.738562 | orchestrator | Monday 23 February 2026 20:04:59 +0000 (0:00:00.381) 0:00:01.021 ******* 2026-02-23 20:05:05.738573 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-23 20:05:05.738584 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-23 20:05:05.738595 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-23 20:05:05.738606 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-23 20:05:05.738617 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-23 20:05:05.738628 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-02-23 20:05:05.738638 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-23 20:05:05.738649 | orchestrator | 2026-02-23 20:05:05.738660 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-23 20:05:05.738671 | orchestrator | Monday 23 February 2026 20:05:04 +0000 (0:00:05.053) 0:00:06.075 ******* 2026-02-23 20:05:05.738682 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:05:05.738693 | orchestrator | 2026-02-23 20:05:05.738703 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-23 20:05:05.738714 | orchestrator | Monday 23 February 2026 20:05:05 +0000 (0:00:00.076) 0:00:06.151 ******* 2026-02-23 20:05:05.738725 | orchestrator | changed: [testbed-manager] 2026-02-23 20:05:05.738736 | orchestrator | 2026-02-23 20:05:05.738747 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:05:05.738760 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:05:05.738771 | orchestrator | 2026-02-23 20:05:05.738783 | orchestrator | 2026-02-23 20:05:05.738796 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:05:05.738810 | orchestrator | Monday 23 February 2026 20:05:05 +0000 (0:00:00.505) 0:00:06.657 ******* 2026-02-23 20:05:05.738823 | orchestrator | =============================================================================== 2026-02-23 20:05:05.738836 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.05s 2026-02-23 20:05:05.738848 | orchestrator | 
osism.commons.sshconfig : Get home directory of operator user ----------- 0.51s 2026-02-23 20:05:05.738861 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.51s 2026-02-23 20:05:05.738874 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.38s 2026-02-23 20:05:05.738887 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-02-23 20:05:05.947199 | orchestrator | + osism apply known-hosts 2026-02-23 20:05:17.759769 | orchestrator | 2026-02-23 20:05:17 | INFO  | Prepare task for execution of known-hosts. 2026-02-23 20:05:17.827852 | orchestrator | 2026-02-23 20:05:17 | INFO  | Task 3144c174-455a-4107-8fbb-614e45c1820b (known-hosts) was prepared for execution. 2026-02-23 20:05:17.827948 | orchestrator | 2026-02-23 20:05:17 | INFO  | It takes a moment until task 3144c174-455a-4107-8fbb-614e45c1820b (known-hosts) has been started and output is visible here. 2026-02-23 20:05:34.083261 | orchestrator | 2026-02-23 20:05:34.083409 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-23 20:05:34.083430 | orchestrator | 2026-02-23 20:05:34.083481 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-23 20:05:34.083497 | orchestrator | Monday 23 February 2026 20:05:21 +0000 (0:00:00.163) 0:00:00.163 ******* 2026-02-23 20:05:34.083509 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-23 20:05:34.083521 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-23 20:05:34.083532 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-23 20:05:34.083569 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-23 20:05:34.083581 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-23 20:05:34.083592 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 
2026-02-23 20:05:34.083603 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-23 20:05:34.083613 | orchestrator | 2026-02-23 20:05:34.083624 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-23 20:05:34.083636 | orchestrator | Monday 23 February 2026 20:05:27 +0000 (0:00:06.049) 0:00:06.213 ******* 2026-02-23 20:05:34.083659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-23 20:05:34.083674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-23 20:05:34.083686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-23 20:05:34.083696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-23 20:05:34.083707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-23 20:05:34.083718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-23 20:05:34.083729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-23 20:05:34.083740 
| orchestrator | 2026-02-23 20:05:34.083751 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-23 20:05:34.083763 | orchestrator | Monday 23 February 2026 20:05:28 +0000 (0:00:00.193) 0:00:06.407 ******* 2026-02-23 20:05:34.083776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBlV2p/5+dP6yAR0ay7DJhLTX3g4PWfGyMgQAk4ooX74w25AiTws5jLDbsnergxAnqLpcBS3TDfxVip57pclseQ=) 2026-02-23 20:05:34.083795 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4zGmCYC5M5b4OEbdFgAkRFM82QagPA/UO7mVi3jzrR+2WTp/NuUFplqefhfWqEPUJLoo8+CkVoXFFS1sebDN1/EWuGQThV3VAXtnbDhLa5Q57e1wNC0f/NIvFTJZx7hH4VCbRqvOgmnE6RY1ZsHGOQoYgISyN1igeyj17oYV761kEwa65HeVDuz7GDEt644dYMxwn/v4ZvVi8IiZk+8aiqXqzOHpBgrwBLO8ftBBrPGMfMeCneowrE/x3GGVKXPbkKV0jCIBWUzWmRPrNImYdjiA7ijYazKnatLdkdhEjvEqiIahj6/vPvC+KkLWWOUnbD30js/00pjF0R0xRdCYvV4+JFgwW/+duwYY4Ny9XTu5eJgrVbgYQ/h6dxVrvS5Dd0HwhSnTEdvHalJsxH7xdovxBmI0aFMx8WUcb4PmtAQ2Sx+9plC4eQ+3NHZzRhzJQFCw0C82L/9LRsraiBpAp/bH7nwcmPWyu/Ek6EnqvXVs3atKtJL8vj0oeWPK4Eps=) 2026-02-23 20:05:34.083812 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOFjjSL830wCqS6nSt6a0VcVDXp1NUSFuiVfjY748BBj) 2026-02-23 20:05:34.083826 | orchestrator | 2026-02-23 20:05:34.083839 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-23 20:05:34.083852 | orchestrator | Monday 23 February 2026 20:05:29 +0000 (0:00:01.180) 0:00:07.588 ******* 2026-02-23 20:05:34.083864 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHmoIV/YTV7PU/TxAmCP6d0H02GKHgL6Ky6RtxnjXK5QyyoO7dBbBD1Vl0grhLBPpynrsM//ICkx3k+ZtOUfTCc=) 2026-02-23 20:05:34.083878 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIL2wX2aZepKfuBOD2MdpgkAMdOLpOaFV1ka/R+PAjk21) 2026-02-23 20:05:34.083928 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfd2H8GAvZ6Qo3pNqHU6C+C3ym7Rj0ax70XCs71SFm65AmouqEegLq3bfCUAcDP2AuE/n7ZjeWMDSL3fSLviEtriVfoOCES7mTu+EdLoWcCokY1LjAMSNILGbakCUjhEyWmrZiy/znvVhgKAi9nwH80Bo2fbl71+pMQ+5AObPdQA+0WLWYMsx4p1F3tdTdXiEBcLLk6y/Psjp5UhXYeVFtxpVOrx41i9f+ppgtQt4Fh+VvPh0dSDCBL78H0OiekAQyCBf2BKs3K3ue1sKfxHalk8SkOk+2PY8pCbWi0tzuJh+OPpa4eSHkQCxlsDztQVOkNJ9Rx1XG01kM+908EBAcdb0HJQSa7r4MnarN46oAT4VzkBPNzSOAjulKr0Z0ZGD3Y3mDByydmJwrXbpX13UusQXFyMqYV62TCdCFSAurTzwpaIupTcLaWUCvqerDQzX0pZ5O3ItY/JaQ86bgb007a6ISDJvPbuxAnLV9Mta04/BCQjGdKj5ce+sdX5GvgwM=) 2026-02-23 20:05:34.083944 | orchestrator | 2026-02-23 20:05:34.083958 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-23 20:05:34.083970 | orchestrator | Monday 23 February 2026 20:05:30 +0000 (0:00:01.116) 0:00:08.704 ******* 2026-02-23 20:05:34.083984 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRU6eJ/9rxvG7D381wUdwZ5Hmek7uKQW101gG85ARDWyGjj5dGp12GEnq5CcvSc+czGcgZY6jqs1Mbx5gt2gc7Yb3i23oasKiJ4fkAn4M93adRRVqiH0nqJu158E79SuTefnrTJoGBDOIs/6rYkpN66KPGko9Uw8M9GG5VPZTC0F3xRDfQeWsdVn5jlkjkIYcRv4F6ItXg+BcogwABfqkt0oxrJ6l52OaVbIATJAXa+lJjE1ERH0mi4LpOCVglOmRDpp2vRrRL4Q2n9Iry8ZOo7E2+JtmhcJvx624ILWPOOyztkAWa+M8VYy9VvWTDMN/8bpFepQykWzrlFtAeBRvvE0vr/aoE11lBPOtorMAvYD/jsE3ZrQV37SYrvj1aNg6lq7gu4hoAH0pu9w+Zny2wzjpQ/t6bvaKNw5H/u+ZSqv/sYoilUezal0b5eAhTNUP6WOcoESdpdoQHbfxzm5gCm84lmfccHiSfyJvYLB/P/cMyMokzxsrxfaG5Cud+Jg0=) 2026-02-23 20:05:34.083998 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFJlJ4hmKNvUsFkOjzXY9Y/RIZxh/A80tWqRSIaFwnplByd1gDlrtI3Lg0G63grpS+FKhya3A5HQq7XHW6PhNho=) 2026-02-23 20:05:34.084081 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHX2oadnOCx8KanEZYbvGiLo4eutBTn+WVrGyz4rcoOO) 2026-02-23 20:05:34.084095 | orchestrator | 2026-02-23 20:05:34.084108 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-23 20:05:34.084119 | orchestrator | Monday 23 February 2026 20:05:31 +0000 (0:00:01.076) 0:00:09.780 ******* 2026-02-23 20:05:34.084135 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp7ysGKiCkT8wTAyouwdc7WeKD7Pc801SOmZqcpr+J79UA0G57GHSZeYyFmuqubguJ/fnUpV5gsY+2loKJqze4qoizfMh+46H7fKL5piwa0znA/VQxyIeUB/qy0m8zTnVtHoprBDN7vEtfLoed8AZS8Mu6EOLHegZtut/9MIhWBwhMf5miPuFgLSR49Ah87+Nlf0+EjKC65+ysmbXdsLupFK2WUwISqvp/9uoEjbUKjzhKWA4zKDqu7WSdvJRjoL0zPVPxpQ6mkj29+NOJfBRV0GHRO8tu2efnpg0BiMu6fTbQ8vyzCh2KklcTqiEyLMqb/Yq/OMC+RuKsXJM68oRv0TVcqJsPTBs17CCodTuatCrWqThM4iOczzR3xUxGOQsUdwbBhaSDfGmLBYcp44R5dtTpwO/SPrFopNXs+GMeIIp6kZ5AYtMJuzAhjX1q4boBEuJ2yIn/1pwCn4sYRW1XUU2w2CdW4iRW2GFda7411JsGMaoKuA/uj3/OcrkjXis=) 2026-02-23 20:05:34.084147 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBAKfvLX8/u3dQMlLyO/uEro7YSxCBN1eRWvvOGkOKCRIQSsTe3fxx9IHtu/JM0ieQzDkkzVdlYQXznnEFlnQg4=) 2026-02-23 20:05:34.084160 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGS3CCLk8tlqcQQq0smjRfcGH++2lte1GqFuDR2vgZV9) 2026-02-23 20:05:34.084171 | orchestrator | 2026-02-23 20:05:34.084183 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-23 20:05:34.084194 | orchestrator | Monday 23 February 2026 20:05:32 +0000 (0:00:01.095) 0:00:10.876 ******* 2026-02-23 20:05:34.084205 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAHV0XZbRzhHH6yzduPKCmOL2/2szLfM9Lq0WYlfnyiI) 2026-02-23 20:05:34.084216 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzbfI/njtF/ZO+1FSoLg10pSoMxDBOL8Re1nnNuZ4I3sUA2XM12HyabbIV0ZZHaXvTVeaaQnzNZZtHdmLJLVqyAcjnGgTiuKYVc+issJ+iywlnU4b9Kzom1+Pz29SAlSXVVeiUOThc3o7OFN7wlQLkW5fOWyVqt0L8iZO0YaEC0Hs7ZMn6Ctg3Q1bTCGfELpIemVpE+7dpbzV/TJtrWxbkreMkwaiYcEl9dC75TBP/A8H48vv7XpfT7kme/T4Ye1B4DWu1FMV4C4JExDCO9IubnsRQIDmuY0TVwqFaCm6GBug9Jw/ATJ4zt7z+VSakjAjZB8zHkQqbMYAM8xGwWxoHhQvdZK+MqdpY4diEQAItDUNCBMqneacjyYkPS0i5lzPxY56Xhmb991fpIWM+BuG2MVFccllElnJgUScNaqLt4QImjjWjbu6l3oTGM0//H/Xl5DWrDNZ3IUI0h+vWZf1erJmmAp1f119nJpW89J1xZi1uj/VURGXCvjXyDuyEhmU=) 2026-02-23 20:05:34.084235 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG/UME+F747jcqRKqRKBGzjd9ow/l5kgbPYFXAuUCnASWEXBLtV6DVflGsDv1I1NovR9l7f+qaJ2vgLqcyL3jfM=) 2026-02-23 20:05:34.084246 | orchestrator | 2026-02-23 20:05:34.084258 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-23 20:05:34.084269 | orchestrator | Monday 23 February 2026 20:05:33 +0000 (0:00:01.100) 0:00:11.977 ******* 2026-02-23 20:05:34.084287 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG6a+EmnzAbXvgNwL6Db+zfzpvvVTlTOUwEX7hZBVtIV9pPDm3ao5LLXgRjihR2VA4wUt7hRQ+mGIPpRcYVPNdY=) 2026-02-23 20:05:44.870945 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDuPYrvqyM3GQ+zEWWnalXgJG4SY3X4tqvG6bQUlck+H) 2026-02-23 20:05:44.871062 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCG2dCAIvjwDby9Y5Y33u9nOvmdGei2SNeP6mfYo+bVT/qAQiUJdMz6ITxD+Lhig9Jo2oJKYq9Ci88kdkuIdcoGOad48SclSesGj5hKtNpQIOn5UFPWtNR5SEyJ1azoA1BeKV0HaLKiwo3ZKJJVjEWxkjkntdDFv9ql1aywBvWokTFFeew2YW4/uV489cWuzsMeF2Zc+CWkGNrtTCburZOTyYfTabmjiVbIHG1FX5c1lFHQZgX6JO7EokXCVL9frAUP9rbMSb/vmGgSY6/Z3tWb91Xqb41RvplQUahScqUja6HNIvfiSwhbBly2QI7nHgPq47kuVKVBV/CF+Sy9wadC8l1U5OYf5l2JL5pA2oUz0MZx0EyLkTWV+jG87y0sipGeK2c/ea5dDyK59ioMmlWtR7pIV1ims9rM6j/l4pFCtuDQTR/iU+m248nk8D0pN36plkft/T9W2VyqQDTjX50t76wmAbjg7wJ8H1RsgdzqfDXVh2Xl0BL6osg2WYns4rM=)
2026-02-23 20:05:44.871084 | orchestrator |
2026-02-23 20:05:44.871097 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-23 20:05:44.871110 | orchestrator | Monday 23 February 2026 20:05:34 +0000 (0:00:01.110) 0:00:13.087 *******
2026-02-23 20:05:44.871121 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD1yvmmCwdowFt9s6Iwo/jydD91gTnu8B+2bXvrmG6XgtIy2GZWmhp81UFljsNymXUWyt9lk7zFj1a7YNkBlDhBB8rK7I/VBlVYiXhAezA42WqNp5dANSanHYvsVqakyVh0hrOYpIduQu5eiT4/GTKcj8xJ2/GNVFIORmUFB6g7sZ/K1oMmnvI+TLwV61TZbhvgw2dRqocVKUc6IhzoJ5MR8e8ubLvK8EOssJrzN4rNyhKwilJNxMcx77W6gLz1FzC9dqG/y26fpA/4mrwiX1JQfPZ5lMlo0m0c4Y3oZw6g/o0V7rCLu63qOpC06t7TnFhFSoktigb+az1xBfHf/Ogt2ACcg3vLEUm9dhKpV5rXS9OSjrvwJRsIBnK4l4FEV2OlCGWgVsjqaBy+VIWZ+3Bq5JWU4B5luPfTfuiB91P+Zsr1oiJtgSrzrOUYdvYnOP0jFK661i3gFTnL1N2GsOX00MnMSnbeMF6p7Y0GUI81JYOv+tlOPL6ikZyaH5oRrpE=)
2026-02-23 20:05:44.871133 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEql69o4Re6qj44kwhoUCXT8usLY0P+oaNjsII5dAID7)
2026-02-23 20:05:44.871145 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQ4Pbn1Bcrc3w8lH6B9kjwKbz76JItQtYuL0yNSyyKhRbLzmHJLlGS+oNp+8I2AGSrmesomQvKutUHICbWGliI=)
2026-02-23 20:05:44.871157 | orchestrator |
2026-02-23 20:05:44.871169 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-02-23 20:05:44.871180 | orchestrator | Monday 23 February 2026 20:05:35 +0000 (0:00:01.092) 0:00:14.179 *******
2026-02-23 20:05:44.871191 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-02-23 20:05:44.871203 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-02-23 20:05:44.871213 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-02-23 20:05:44.871224 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-02-23 20:05:44.871234 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-23 20:05:44.871263 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-23 20:05:44.871275 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-23 20:05:44.871308 | orchestrator |
2026-02-23 20:05:44.871320 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-02-23 20:05:44.871382 | orchestrator | Monday 23 February 2026 20:05:41 +0000 (0:00:05.280) 0:00:19.460 *******
2026-02-23 20:05:44.871394 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-02-23 20:05:44.871407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-02-23 20:05:44.871418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-02-23 20:05:44.871429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-02-23 20:05:44.871440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-02-23 20:05:44.871451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-02-23 20:05:44.871464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-02-23 20:05:44.871476 | orchestrator |
2026-02-23 20:05:44.871507 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-23 20:05:44.871520 | orchestrator | Monday 23 February 2026 20:05:41 +0000 (0:00:00.183) 0:00:19.644 *******
2026-02-23 20:05:44.871536 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4zGmCYC5M5b4OEbdFgAkRFM82QagPA/UO7mVi3jzrR+2WTp/NuUFplqefhfWqEPUJLoo8+CkVoXFFS1sebDN1/EWuGQThV3VAXtnbDhLa5Q57e1wNC0f/NIvFTJZx7hH4VCbRqvOgmnE6RY1ZsHGOQoYgISyN1igeyj17oYV761kEwa65HeVDuz7GDEt644dYMxwn/v4ZvVi8IiZk+8aiqXqzOHpBgrwBLO8ftBBrPGMfMeCneowrE/x3GGVKXPbkKV0jCIBWUzWmRPrNImYdjiA7ijYazKnatLdkdhEjvEqiIahj6/vPvC+KkLWWOUnbD30js/00pjF0R0xRdCYvV4+JFgwW/+duwYY4Ny9XTu5eJgrVbgYQ/h6dxVrvS5Dd0HwhSnTEdvHalJsxH7xdovxBmI0aFMx8WUcb4PmtAQ2Sx+9plC4eQ+3NHZzRhzJQFCw0C82L/9LRsraiBpAp/bH7nwcmPWyu/Ek6EnqvXVs3atKtJL8vj0oeWPK4Eps=)
2026-02-23 20:05:44.871549 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBlV2p/5+dP6yAR0ay7DJhLTX3g4PWfGyMgQAk4ooX74w25AiTws5jLDbsnergxAnqLpcBS3TDfxVip57pclseQ=)
2026-02-23 20:05:44.871562 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOFjjSL830wCqS6nSt6a0VcVDXp1NUSFuiVfjY748BBj)
2026-02-23 20:05:44.871575 | orchestrator |
2026-02-23 20:05:44.871587 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-23 20:05:44.871599 | orchestrator | Monday 23 February 2026 20:05:42 +0000 (0:00:01.022) 0:00:20.666 *******
2026-02-23 20:05:44.871612 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfd2H8GAvZ6Qo3pNqHU6C+C3ym7Rj0ax70XCs71SFm65AmouqEegLq3bfCUAcDP2AuE/n7ZjeWMDSL3fSLviEtriVfoOCES7mTu+EdLoWcCokY1LjAMSNILGbakCUjhEyWmrZiy/znvVhgKAi9nwH80Bo2fbl71+pMQ+5AObPdQA+0WLWYMsx4p1F3tdTdXiEBcLLk6y/Psjp5UhXYeVFtxpVOrx41i9f+ppgtQt4Fh+VvPh0dSDCBL78H0OiekAQyCBf2BKs3K3ue1sKfxHalk8SkOk+2PY8pCbWi0tzuJh+OPpa4eSHkQCxlsDztQVOkNJ9Rx1XG01kM+908EBAcdb0HJQSa7r4MnarN46oAT4VzkBPNzSOAjulKr0Z0ZGD3Y3mDByydmJwrXbpX13UusQXFyMqYV62TCdCFSAurTzwpaIupTcLaWUCvqerDQzX0pZ5O3ItY/JaQ86bgb007a6ISDJvPbuxAnLV9Mta04/BCQjGdKj5ce+sdX5GvgwM=)
2026-02-23 20:05:44.871634 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHmoIV/YTV7PU/TxAmCP6d0H02GKHgL6Ky6RtxnjXK5QyyoO7dBbBD1Vl0grhLBPpynrsM//ICkx3k+ZtOUfTCc=)
2026-02-23 20:05:44.871646 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL2wX2aZepKfuBOD2MdpgkAMdOLpOaFV1ka/R+PAjk21)
2026-02-23 20:05:44.871659 | orchestrator |
2026-02-23 20:05:44.871671 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-23 20:05:44.871684 | orchestrator | Monday 23 February 2026 20:05:43 +0000 (0:00:01.120) 0:00:21.786 *******
2026-02-23 20:05:44.871697 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRU6eJ/9rxvG7D381wUdwZ5Hmek7uKQW101gG85ARDWyGjj5dGp12GEnq5CcvSc+czGcgZY6jqs1Mbx5gt2gc7Yb3i23oasKiJ4fkAn4M93adRRVqiH0nqJu158E79SuTefnrTJoGBDOIs/6rYkpN66KPGko9Uw8M9GG5VPZTC0F3xRDfQeWsdVn5jlkjkIYcRv4F6ItXg+BcogwABfqkt0oxrJ6l52OaVbIATJAXa+lJjE1ERH0mi4LpOCVglOmRDpp2vRrRL4Q2n9Iry8ZOo7E2+JtmhcJvx624ILWPOOyztkAWa+M8VYy9VvWTDMN/8bpFepQykWzrlFtAeBRvvE0vr/aoE11lBPOtorMAvYD/jsE3ZrQV37SYrvj1aNg6lq7gu4hoAH0pu9w+Zny2wzjpQ/t6bvaKNw5H/u+ZSqv/sYoilUezal0b5eAhTNUP6WOcoESdpdoQHbfxzm5gCm84lmfccHiSfyJvYLB/P/cMyMokzxsrxfaG5Cud+Jg0=)
2026-02-23 20:05:44.871710 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFJlJ4hmKNvUsFkOjzXY9Y/RIZxh/A80tWqRSIaFwnplByd1gDlrtI3Lg0G63grpS+FKhya3A5HQq7XHW6PhNho=)
2026-02-23 20:05:44.871723 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHX2oadnOCx8KanEZYbvGiLo4eutBTn+WVrGyz4rcoOO)
2026-02-23 20:05:44.871734 | orchestrator |
2026-02-23 20:05:44.871745 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-23 20:05:44.871756 | orchestrator | Monday 23 February 2026 20:05:44 +0000 (0:00:01.001) 0:00:22.787 *******
2026-02-23 20:05:44.871782 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp7ysGKiCkT8wTAyouwdc7WeKD7Pc801SOmZqcpr+J79UA0G57GHSZeYyFmuqubguJ/fnUpV5gsY+2loKJqze4qoizfMh+46H7fKL5piwa0znA/VQxyIeUB/qy0m8zTnVtHoprBDN7vEtfLoed8AZS8Mu6EOLHegZtut/9MIhWBwhMf5miPuFgLSR49Ah87+Nlf0+EjKC65+ysmbXdsLupFK2WUwISqvp/9uoEjbUKjzhKWA4zKDqu7WSdvJRjoL0zPVPxpQ6mkj29+NOJfBRV0GHRO8tu2efnpg0BiMu6fTbQ8vyzCh2KklcTqiEyLMqb/Yq/OMC+RuKsXJM68oRv0TVcqJsPTBs17CCodTuatCrWqThM4iOczzR3xUxGOQsUdwbBhaSDfGmLBYcp44R5dtTpwO/SPrFopNXs+GMeIIp6kZ5AYtMJuzAhjX1q4boBEuJ2yIn/1pwCn4sYRW1XUU2w2CdW4iRW2GFda7411JsGMaoKuA/uj3/OcrkjXis=)
2026-02-23 20:05:49.790805 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBAKfvLX8/u3dQMlLyO/uEro7YSxCBN1eRWvvOGkOKCRIQSsTe3fxx9IHtu/JM0ieQzDkkzVdlYQXznnEFlnQg4=)
2026-02-23 20:05:49.790903 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGS3CCLk8tlqcQQq0smjRfcGH++2lte1GqFuDR2vgZV9)
2026-02-23 20:05:49.790918 | orchestrator |
2026-02-23 20:05:49.790930 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-23 20:05:49.790941 | orchestrator | Monday 23 February 2026 20:05:45 +0000 (0:00:00.977) 0:00:23.765 *******
2026-02-23 20:05:49.790953 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzbfI/njtF/ZO+1FSoLg10pSoMxDBOL8Re1nnNuZ4I3sUA2XM12HyabbIV0ZZHaXvTVeaaQnzNZZtHdmLJLVqyAcjnGgTiuKYVc+issJ+iywlnU4b9Kzom1+Pz29SAlSXVVeiUOThc3o7OFN7wlQLkW5fOWyVqt0L8iZO0YaEC0Hs7ZMn6Ctg3Q1bTCGfELpIemVpE+7dpbzV/TJtrWxbkreMkwaiYcEl9dC75TBP/A8H48vv7XpfT7kme/T4Ye1B4DWu1FMV4C4JExDCO9IubnsRQIDmuY0TVwqFaCm6GBug9Jw/ATJ4zt7z+VSakjAjZB8zHkQqbMYAM8xGwWxoHhQvdZK+MqdpY4diEQAItDUNCBMqneacjyYkPS0i5lzPxY56Xhmb991fpIWM+BuG2MVFccllElnJgUScNaqLt4QImjjWjbu6l3oTGM0//H/Xl5DWrDNZ3IUI0h+vWZf1erJmmAp1f119nJpW89J1xZi1uj/VURGXCvjXyDuyEhmU=)
2026-02-23 20:05:49.790966 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG/UME+F747jcqRKqRKBGzjd9ow/l5kgbPYFXAuUCnASWEXBLtV6DVflGsDv1I1NovR9l7f+qaJ2vgLqcyL3jfM=)
2026-02-23 20:05:49.790999 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAHV0XZbRzhHH6yzduPKCmOL2/2szLfM9Lq0WYlfnyiI)
2026-02-23 20:05:49.791014 | orchestrator |
2026-02-23 20:05:49.791051 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-23 20:05:49.791077 | orchestrator | Monday 23 February 2026 20:05:46 +0000 (0:00:00.999) 0:00:24.764 *******
2026-02-23 20:05:49.791094 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG6a+EmnzAbXvgNwL6Db+zfzpvvVTlTOUwEX7hZBVtIV9pPDm3ao5LLXgRjihR2VA4wUt7hRQ+mGIPpRcYVPNdY=)
2026-02-23 20:05:49.791112 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCG2dCAIvjwDby9Y5Y33u9nOvmdGei2SNeP6mfYo+bVT/qAQiUJdMz6ITxD+Lhig9Jo2oJKYq9Ci88kdkuIdcoGOad48SclSesGj5hKtNpQIOn5UFPWtNR5SEyJ1azoA1BeKV0HaLKiwo3ZKJJVjEWxkjkntdDFv9ql1aywBvWokTFFeew2YW4/uV489cWuzsMeF2Zc+CWkGNrtTCburZOTyYfTabmjiVbIHG1FX5c1lFHQZgX6JO7EokXCVL9frAUP9rbMSb/vmGgSY6/Z3tWb91Xqb41RvplQUahScqUja6HNIvfiSwhbBly2QI7nHgPq47kuVKVBV/CF+Sy9wadC8l1U5OYf5l2JL5pA2oUz0MZx0EyLkTWV+jG87y0sipGeK2c/ea5dDyK59ioMmlWtR7pIV1ims9rM6j/l4pFCtuDQTR/iU+m248nk8D0pN36plkft/T9W2VyqQDTjX50t76wmAbjg7wJ8H1RsgdzqfDXVh2Xl0BL6osg2WYns4rM=)
2026-02-23 20:05:49.791129 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDuPYrvqyM3GQ+zEWWnalXgJG4SY3X4tqvG6bQUlck+H)
2026-02-23 20:05:49.791145 | orchestrator |
2026-02-23 20:05:49.791162 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-23 20:05:49.791178 | orchestrator | Monday 23 February 2026 20:05:47 +0000 (0:00:01.000) 0:00:25.765 *******
2026-02-23 20:05:49.791195 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD1yvmmCwdowFt9s6Iwo/jydD91gTnu8B+2bXvrmG6XgtIy2GZWmhp81UFljsNymXUWyt9lk7zFj1a7YNkBlDhBB8rK7I/VBlVYiXhAezA42WqNp5dANSanHYvsVqakyVh0hrOYpIduQu5eiT4/GTKcj8xJ2/GNVFIORmUFB6g7sZ/K1oMmnvI+TLwV61TZbhvgw2dRqocVKUc6IhzoJ5MR8e8ubLvK8EOssJrzN4rNyhKwilJNxMcx77W6gLz1FzC9dqG/y26fpA/4mrwiX1JQfPZ5lMlo0m0c4Y3oZw6g/o0V7rCLu63qOpC06t7TnFhFSoktigb+az1xBfHf/Ogt2ACcg3vLEUm9dhKpV5rXS9OSjrvwJRsIBnK4l4FEV2OlCGWgVsjqaBy+VIWZ+3Bq5JWU4B5luPfTfuiB91P+Zsr1oiJtgSrzrOUYdvYnOP0jFK661i3gFTnL1N2GsOX00MnMSnbeMF6p7Y0GUI81JYOv+tlOPL6ikZyaH5oRrpE=)
2026-02-23 20:05:49.791214 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQ4Pbn1Bcrc3w8lH6B9kjwKbz76JItQtYuL0yNSyyKhRbLzmHJLlGS+oNp+8I2AGSrmesomQvKutUHICbWGliI=)
2026-02-23 20:05:49.791231 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEql69o4Re6qj44kwhoUCXT8usLY0P+oaNjsII5dAID7)
2026-02-23 20:05:49.791248 | orchestrator |
2026-02-23 20:05:49.791265 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-02-23 20:05:49.791282 | orchestrator | Monday 23 February 2026 20:05:48 +0000 (0:00:01.056) 0:00:26.821 *******
2026-02-23 20:05:49.791301 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-23 20:05:49.791318 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-23 20:05:49.791459 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-23 20:05:49.791473 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-23 20:05:49.791484 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-23 20:05:49.791496 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-23 20:05:49.791506 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-23 20:05:49.791518 | orchestrator |
skipping: [testbed-manager]
2026-02-23 20:05:49.791529 | orchestrator |
2026-02-23 20:05:49.791540 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-02-23 20:05:49.791551 | orchestrator | Monday 23 February 2026 20:05:48 +0000 (0:00:00.175) 0:00:26.997 *******
2026-02-23 20:05:49.791574 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:05:49.791585 | orchestrator |
2026-02-23 20:05:49.791594 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-02-23 20:05:49.791604 | orchestrator | Monday 23 February 2026 20:05:48 +0000 (0:00:00.055) 0:00:27.053 *******
2026-02-23 20:05:49.791614 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:05:49.791623 | orchestrator |
2026-02-23 20:05:49.791633 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-02-23 20:05:49.791642 | orchestrator | Monday 23 February 2026 20:05:48 +0000 (0:00:00.051) 0:00:27.104 *******
2026-02-23 20:05:49.791652 | orchestrator | changed: [testbed-manager]
2026-02-23 20:05:49.791661 | orchestrator |
2026-02-23 20:05:49.791670 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:05:49.791680 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-23 20:05:49.791692 | orchestrator |
2026-02-23 20:05:49.791701 | orchestrator |
2026-02-23 20:05:49.791711 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:05:49.791720 | orchestrator | Monday 23 February 2026 20:05:49 +0000 (0:00:00.760) 0:00:27.865 *******
2026-02-23 20:05:49.791730 | orchestrator | ===============================================================================
2026-02-23 20:05:49.791739 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.05s
2026-02-23 20:05:49.791749 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.28s
2026-02-23 20:05:49.791759 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s
2026-02-23 20:05:49.791769 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-02-23 20:05:49.791778 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-02-23 20:05:49.791788 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-02-23 20:05:49.791797 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2026-02-23 20:05:49.791806 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2026-02-23 20:05:49.791816 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2026-02-23 20:05:49.791826 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2026-02-23 20:05:49.791835 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2026-02-23 20:05:49.791855 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2026-02-23 20:05:49.791877 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2026-02-23 20:05:49.791887 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2026-02-23 20:05:49.791896 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2026-02-23 20:05:49.791906 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s
2026-02-23 20:05:49.791915 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.76s
2026-02-23 20:05:49.791925 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s
2026-02-23 20:05:49.791934 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s
2026-02-23 20:05:49.791944 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s
2026-02-23 20:05:50.076692 | orchestrator | + osism apply squid
2026-02-23 20:06:02.238079 | orchestrator | 2026-02-23 20:06:02 | INFO  | Prepare task for execution of squid.
2026-02-23 20:06:02.303479 | orchestrator | 2026-02-23 20:06:02 | INFO  | Task 3b4f43d5-caa5-4661-aff6-c143bbf64de3 (squid) was prepared for execution.
2026-02-23 20:06:02.303594 | orchestrator | 2026-02-23 20:06:02 | INFO  | It takes a moment until task 3b4f43d5-caa5-4661-aff6-c143bbf64de3 (squid) has been started and output is visible here.
2026-02-23 20:07:56.377105 | orchestrator |
2026-02-23 20:07:56.377258 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-02-23 20:07:56.377287 | orchestrator |
2026-02-23 20:07:56.377308 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-02-23 20:07:56.377326 | orchestrator | Monday 23 February 2026 20:06:06 +0000 (0:00:00.170) 0:00:00.170 *******
2026-02-23 20:07:56.377344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-02-23 20:07:56.377364 | orchestrator |
2026-02-23 20:07:56.377382 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-02-23 20:07:56.377399 | orchestrator | Monday 23 February 2026 20:06:06 +0000 (0:00:00.101) 0:00:00.271 *******
2026-02-23 20:07:56.377418 | orchestrator | ok: [testbed-manager]
2026-02-23 20:07:56.377438 | orchestrator |
2026-02-23 20:07:56.377457 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-02-23 20:07:56.377475 | orchestrator | Monday 23 February 2026 20:06:07 +0000 (0:00:01.604) 0:00:01.875 *******
2026-02-23 20:07:56.377493 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-02-23 20:07:56.377511 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-02-23 20:07:56.377528 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-02-23 20:07:56.377546 | orchestrator |
2026-02-23 20:07:56.377564 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-02-23 20:07:56.377582 | orchestrator | Monday 23 February 2026 20:06:09 +0000 (0:00:01.194) 0:00:03.070 *******
2026-02-23 20:07:56.377601 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-02-23 20:07:56.377620 | orchestrator |
2026-02-23 20:07:56.377640 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-02-23 20:07:56.377662 | orchestrator | Monday 23 February 2026 20:06:10 +0000 (0:00:01.023) 0:00:04.093 *******
2026-02-23 20:07:56.377682 | orchestrator | ok: [testbed-manager]
2026-02-23 20:07:56.377700 | orchestrator |
2026-02-23 20:07:56.377721 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-02-23 20:07:56.377738 | orchestrator | Monday 23 February 2026 20:06:10 +0000 (0:00:00.372) 0:00:04.466 *******
2026-02-23 20:07:56.377755 | orchestrator | changed: [testbed-manager]
2026-02-23 20:07:56.377773 | orchestrator |
2026-02-23 20:07:56.377790 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-02-23 20:07:56.377855 | orchestrator | Monday 23 February 2026 20:06:11 +0000 (0:00:00.883) 0:00:05.350 *******
2026-02-23 20:07:56.377876 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-02-23 20:07:56.377896 | orchestrator | ok: [testbed-manager]
2026-02-23 20:07:56.377917 | orchestrator |
2026-02-23 20:07:56.377936 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-02-23 20:07:56.377956 | orchestrator | Monday 23 February 2026 20:06:43 +0000 (0:00:32.392) 0:00:37.742 *******
2026-02-23 20:07:56.377975 | orchestrator | changed: [testbed-manager]
2026-02-23 20:07:56.377995 | orchestrator |
2026-02-23 20:07:56.378014 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-02-23 20:07:56.378116 | orchestrator | Monday 23 February 2026 20:06:55 +0000 (0:00:11.692) 0:00:49.435 *******
2026-02-23 20:07:56.378138 | orchestrator | Pausing for 60 seconds
2026-02-23 20:07:56.378158 | orchestrator | changed: [testbed-manager]
2026-02-23 20:07:56.378178 | orchestrator |
2026-02-23 20:07:56.378197 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-02-23 20:07:56.378214 | orchestrator | Monday 23 February 2026 20:07:55 +0000 (0:01:00.079) 0:01:49.514 *******
2026-02-23 20:07:56.378232 | orchestrator | ok: [testbed-manager]
2026-02-23 20:07:56.378250 | orchestrator |
2026-02-23 20:07:56.378269 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-02-23 20:07:56.378323 | orchestrator | Monday 23 February 2026 20:07:55 +0000 (0:00:00.064) 0:01:49.579 *******
2026-02-23 20:07:56.378343 | orchestrator | changed: [testbed-manager]
2026-02-23 20:07:56.378361 | orchestrator |
2026-02-23 20:07:56.378378 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:07:56.378397 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:07:56.378414 | orchestrator |
2026-02-23 20:07:56.378431 | orchestrator |
2026-02-23 20:07:56.378449 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:07:56.378466 | orchestrator | Monday 23 February 2026 20:07:56 +0000 (0:00:00.549) 0:01:50.128 *******
2026-02-23 20:07:56.378484 | orchestrator | ===============================================================================
2026-02-23 20:07:56.378502 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-02-23 20:07:56.378520 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.39s
2026-02-23 20:07:56.378537 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.69s
2026-02-23 20:07:56.378554 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.60s
2026-02-23 20:07:56.378572 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s
2026-02-23 20:07:56.378590 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.02s
2026-02-23 20:07:56.378607 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s
2026-02-23 20:07:56.378625 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.55s
2026-02-23 20:07:56.378641 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s
2026-02-23 20:07:56.378658 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2026-02-23 20:07:56.378675 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-02-23 20:07:56.565010 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-02-23 20:07:56.565125 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla
2026-02-23 20:07:56.567894 | orchestrator | + set -e
2026-02-23 20:07:56.567941 | orchestrator | + NAMESPACE=kolla
2026-02-23 20:07:56.567953 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-23 20:07:56.571544 | orchestrator | ++ semver latest 9.0.0
2026-02-23 20:07:56.613122 | orchestrator | + [[ -1 -lt 0 ]]
2026-02-23 20:07:56.613205 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-02-23 20:07:56.613544 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-23 20:08:08.389411 | orchestrator | 2026-02-23 20:08:08 | INFO  | Prepare task for execution of operator.
2026-02-23 20:08:08.452662 | orchestrator | 2026-02-23 20:08:08 | INFO  | Task 34715c66-b33a-4734-abdc-93bac36562fc (operator) was prepared for execution.
2026-02-23 20:08:08.452752 | orchestrator | 2026-02-23 20:08:08 | INFO  | It takes a moment until task 34715c66-b33a-4734-abdc-93bac36562fc (operator) has been started and output is visible here.
2026-02-23 20:08:23.554632 | orchestrator |
2026-02-23 20:08:23.554726 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-23 20:08:23.554739 | orchestrator |
2026-02-23 20:08:23.554749 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-23 20:08:23.554758 | orchestrator | Monday 23 February 2026 20:08:12 +0000 (0:00:00.106) 0:00:00.106 *******
2026-02-23 20:08:23.554767 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:08:23.554778 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:08:23.554787 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:08:23.554796 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:08:23.554805 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:08:23.554813 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:08:23.554826 | orchestrator |
2026-02-23 20:08:23.554835 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-23 20:08:23.554866 | orchestrator | Monday 23 February 2026 20:08:15 +0000 (0:00:03.344) 0:00:03.450 *******
2026-02-23 20:08:23.554876 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:08:23.554885 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:08:23.554893 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:08:23.554902 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:08:23.554911 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:08:23.554919 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:08:23.554928 | orchestrator |
2026-02-23 20:08:23.554987 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-02-23 20:08:23.555004 | orchestrator |
2026-02-23 20:08:23.555018 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-02-23 20:08:23.555030 | orchestrator | Monday 23 February 2026 20:08:16 +0000 (0:00:00.713) 0:00:04.164 *******
2026-02-23 20:08:23.555039 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:08:23.555047 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:08:23.555056 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:08:23.555065 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:08:23.555073 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:08:23.555081 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:08:23.555090 | orchestrator |
2026-02-23 20:08:23.555099 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-02-23 20:08:23.555124 | orchestrator | Monday 23 February 2026 20:08:16 +0000 (0:00:00.142) 0:00:04.306 *******
2026-02-23 20:08:23.555137 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:08:23.555146 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:08:23.555154 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:08:23.555163 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:08:23.555171 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:08:23.555181 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:08:23.555191 | orchestrator |
2026-02-23 20:08:23.555200 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-23 20:08:23.555210 | orchestrator | Monday 23 February 2026 20:08:16 +0000 (0:00:00.144) 0:00:04.451 *******
2026-02-23 20:08:23.555221 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:08:23.555231 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:08:23.555241 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:08:23.555252 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:08:23.555262 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:08:23.555271 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:08:23.555281 | orchestrator |
2026-02-23 20:08:23.555291 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-23 20:08:23.555301 | orchestrator | Monday 23 February 2026 20:08:16 +0000 (0:00:00.601) 0:00:05.053 *******
2026-02-23 20:08:23.555310 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:08:23.555320 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:08:23.555329 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:08:23.555339 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:08:23.555349 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:08:23.555359 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:08:23.555369 | orchestrator |
2026-02-23 20:08:23.555379 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-23 20:08:23.555388 | orchestrator | Monday 23 February 2026 20:08:17 +0000 (0:00:00.796) 0:00:05.849 *******
2026-02-23 20:08:23.555398 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-02-23 20:08:23.555409 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-02-23 20:08:23.555418 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-23 20:08:23.555428 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-02-23 20:08:23.555437 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-23 20:08:23.555447 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-02-23 20:08:23.555457 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-23 20:08:23.555467 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-23 20:08:23.555476 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-23 20:08:23.555494 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-23 20:08:23.555504 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-23 20:08:23.555514 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-23 20:08:23.555524 | orchestrator |
2026-02-23 20:08:23.555534 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-23 20:08:23.555542 | orchestrator | Monday 23 February 2026 20:08:18 +0000 (0:00:01.161) 0:00:07.011 *******
2026-02-23 20:08:23.555551 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:08:23.555559 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:08:23.555568 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:08:23.555576 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:08:23.555585 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:08:23.555593 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:08:23.555602 | orchestrator |
2026-02-23 20:08:23.555610 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-23 20:08:23.555620 | orchestrator | Monday 23 February 2026 20:08:20 +0000 (0:00:01.177) 0:00:08.188 *******
2026-02-23 20:08:23.555628 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-23 20:08:23.555637 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-23 20:08:23.555646 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-23 20:08:23.555654 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-23 20:08:23.555663 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-23 20:08:23.555689 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-23 20:08:23.555698 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-23 20:08:23.555707 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-23 20:08:23.555716 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-23 20:08:23.555724 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-23 20:08:23.555733 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-23 20:08:23.555741 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-23 20:08:23.555750 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-23 20:08:23.555758 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-23 20:08:23.555767 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-23 20:08:23.555775 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-23 20:08:23.555784 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-23 20:08:23.555792 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-23 20:08:23.555801 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-23 20:08:23.555810 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-23 20:08:23.555818 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-23 20:08:23.555826 | orchestrator |
2026-02-23 20:08:23.555835 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-23 20:08:23.555844 | orchestrator | Monday 23 February 2026 20:08:21 +0000 (0:00:01.241) 0:00:09.430 *******
2026-02-23 20:08:23.555853 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:08:23.555861 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:08:23.555870 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:08:23.555878 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:08:23.555887 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:08:23.555895 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:08:23.555904 | orchestrator |
2026-02-23 20:08:23.555912 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-23 20:08:23.555927 | orchestrator | Monday 23 February 2026 20:08:21 +0000 (0:00:00.188) 0:00:09.619 *******
2026-02-23 20:08:23.555956 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:08:23.555966 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:08:23.555975 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:08:23.555983 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:08:23.555993 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:08:23.556008 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:08:23.556023 | orchestrator |
2026-02-23 20:08:23.556038 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-23 20:08:23.556054 | orchestrator | Monday 23 February 2026 20:08:21 +0000 (0:00:00.206) 0:00:09.826 *******
2026-02-23 20:08:23.556063 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:08:23.556071 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:08:23.556080 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:08:23.556088 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:08:23.556097 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:08:23.556106 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:08:23.556114 | orchestrator |
2026-02-23 20:08:23.556123 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-23 20:08:23.556131 | orchestrator | Monday 23 February 2026 20:08:22 +0000 (0:00:00.573) 0:00:10.399 *******
2026-02-23 20:08:23.556140 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:08:23.556148 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:08:23.556157 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:08:23.556165 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:08:23.556174 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:08:23.556182 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:08:23.556191 | orchestrator |
2026-02-23 20:08:23.556199 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-23 20:08:23.556799 | orchestrator | Monday 23 February 2026 20:08:22 +0000 (0:00:00.219) 0:00:10.618 *******
2026-02-23 20:08:23.556826 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-23 20:08:23.556836 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-23 20:08:23.556844 |
orchestrator | changed: [testbed-node-0] 2026-02-23 20:08:23.556854 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:08:23.556863 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-23 20:08:23.556871 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:08:23.556880 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-23 20:08:23.556889 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:08:23.556897 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-23 20:08:23.556906 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:08:23.556914 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-23 20:08:23.556923 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:08:23.556932 | orchestrator | 2026-02-23 20:08:23.556974 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-23 20:08:23.556989 | orchestrator | Monday 23 February 2026 20:08:23 +0000 (0:00:00.720) 0:00:11.339 ******* 2026-02-23 20:08:23.557003 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:08:23.557018 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:08:23.557032 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:08:23.557045 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:08:23.557054 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:08:23.557062 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:08:23.557071 | orchestrator | 2026-02-23 20:08:23.557080 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-23 20:08:23.557088 | orchestrator | Monday 23 February 2026 20:08:23 +0000 (0:00:00.142) 0:00:11.481 ******* 2026-02-23 20:08:23.557097 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:08:23.557105 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:08:23.557114 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:08:23.557122 | orchestrator | skipping: 
[testbed-node-3] 2026-02-23 20:08:23.557158 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:08:24.793124 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:08:24.793200 | orchestrator | 2026-02-23 20:08:24.793208 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-23 20:08:24.793216 | orchestrator | Monday 23 February 2026 20:08:23 +0000 (0:00:00.144) 0:00:11.625 ******* 2026-02-23 20:08:24.793221 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:08:24.793226 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:08:24.793232 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:08:24.793238 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:08:24.793243 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:08:24.793248 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:08:24.793253 | orchestrator | 2026-02-23 20:08:24.793258 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-23 20:08:24.793264 | orchestrator | Monday 23 February 2026 20:08:23 +0000 (0:00:00.148) 0:00:11.774 ******* 2026-02-23 20:08:24.793269 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:08:24.793274 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:08:24.793279 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:08:24.793284 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:08:24.793289 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:08:24.793294 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:08:24.793300 | orchestrator | 2026-02-23 20:08:24.793305 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-23 20:08:24.793310 | orchestrator | Monday 23 February 2026 20:08:24 +0000 (0:00:00.637) 0:00:12.411 ******* 2026-02-23 20:08:24.793315 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:08:24.793320 | orchestrator | skipping: 
[testbed-node-1] 2026-02-23 20:08:24.793325 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:08:24.793330 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:08:24.793335 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:08:24.793340 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:08:24.793345 | orchestrator | 2026-02-23 20:08:24.793350 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:08:24.793374 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-23 20:08:24.793381 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-23 20:08:24.793386 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-23 20:08:24.793391 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-23 20:08:24.793397 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-23 20:08:24.793403 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-23 20:08:24.793412 | orchestrator | 2026-02-23 20:08:24.793420 | orchestrator | 2026-02-23 20:08:24.793429 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:08:24.793437 | orchestrator | Monday 23 February 2026 20:08:24 +0000 (0:00:00.219) 0:00:12.631 ******* 2026-02-23 20:08:24.793445 | orchestrator | =============================================================================== 2026-02-23 20:08:24.793453 | orchestrator | Gathering Facts --------------------------------------------------------- 3.34s 2026-02-23 20:08:24.793460 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.24s 2026-02-23 
20:08:24.793470 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s 2026-02-23 20:08:24.793496 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s 2026-02-23 20:08:24.793504 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2026-02-23 20:08:24.793512 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s 2026-02-23 20:08:24.793520 | orchestrator | Do not require tty for all users ---------------------------------------- 0.71s 2026-02-23 20:08:24.793528 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s 2026-02-23 20:08:24.793536 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s 2026-02-23 20:08:24.793544 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2026-02-23 20:08:24.793552 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2026-02-23 20:08:24.793560 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s 2026-02-23 20:08:24.793569 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.21s 2026-02-23 20:08:24.793576 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s 2026-02-23 20:08:24.793581 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2026-02-23 20:08:24.793586 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s 2026-02-23 20:08:24.793591 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2026-02-23 20:08:24.793596 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 
2026-02-23 20:08:24.793601 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2026-02-23 20:08:25.063101 | orchestrator | + osism apply --environment custom facts
2026-02-23 20:08:27.008741 | orchestrator | 2026-02-23 20:08:27 | INFO  | Trying to run play facts in environment custom
2026-02-23 20:08:37.136606 | orchestrator | 2026-02-23 20:08:37 | INFO  | Prepare task for execution of facts.
2026-02-23 20:08:37.207624 | orchestrator | 2026-02-23 20:08:37 | INFO  | Task e9dee0a3-e665-4bc2-9c6e-07a8e5484805 (facts) was prepared for execution.
2026-02-23 20:08:37.207719 | orchestrator | 2026-02-23 20:08:37 | INFO  | It takes a moment until task e9dee0a3-e665-4bc2-9c6e-07a8e5484805 (facts) has been started and output is visible here.
2026-02-23 20:09:21.340528 | orchestrator |
2026-02-23 20:09:21.340639 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-23 20:09:21.340655 | orchestrator |
2026-02-23 20:09:21.340667 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-23 20:09:21.340678 | orchestrator | Monday 23 February 2026 20:08:40 +0000 (0:00:00.049) 0:00:00.049 *******
2026-02-23 20:09:21.340690 | orchestrator | ok: [testbed-manager]
2026-02-23 20:09:21.340702 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:09:21.340713 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:09:21.340724 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:09:21.340735 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:09:21.340745 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:09:21.340756 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:09:21.340767 | orchestrator |
2026-02-23 20:09:21.340778 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-23 20:09:21.340789 | orchestrator | Monday 23 February 2026 20:08:42 +0000 (0:00:01.456) 0:00:01.505 *******
2026-02-23 20:09:21.340799 | orchestrator | ok: [testbed-manager]
2026-02-23 20:09:21.340810 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:09:21.340821 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:09:21.340831 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:09:21.340843 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:09:21.340871 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:09:21.340882 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:09:21.340893 | orchestrator |
2026-02-23 20:09:21.340929 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-23 20:09:21.340940 | orchestrator |
2026-02-23 20:09:21.340951 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-23 20:09:21.340962 | orchestrator | Monday 23 February 2026 20:08:43 +0000 (0:00:01.289) 0:00:02.794 *******
2026-02-23 20:09:21.340973 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:21.340984 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:21.340994 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:21.341005 | orchestrator |
2026-02-23 20:09:21.341016 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-23 20:09:21.341028 | orchestrator | Monday 23 February 2026 20:08:43 +0000 (0:00:00.086) 0:00:02.880 *******
2026-02-23 20:09:21.341040 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:21.341057 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:21.341074 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:21.341092 | orchestrator |
2026-02-23 20:09:21.341110 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-23 20:09:21.341128 | orchestrator | Monday 23 February 2026 20:08:43 +0000 (0:00:00.180) 0:00:03.060 *******
2026-02-23 20:09:21.341139 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:21.341149 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:21.341160 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:21.341170 | orchestrator |
2026-02-23 20:09:21.341181 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-23 20:09:21.341224 | orchestrator | Monday 23 February 2026 20:08:44 +0000 (0:00:00.178) 0:00:03.239 *******
2026-02-23 20:09:21.341237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-23 20:09:21.341249 | orchestrator |
2026-02-23 20:09:21.341260 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-23 20:09:21.341271 | orchestrator | Monday 23 February 2026 20:08:44 +0000 (0:00:00.106) 0:00:03.345 *******
2026-02-23 20:09:21.341290 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:21.341316 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:21.341336 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:21.341353 | orchestrator |
2026-02-23 20:09:21.341370 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-23 20:09:21.341388 | orchestrator | Monday 23 February 2026 20:08:44 +0000 (0:00:00.436) 0:00:03.782 *******
2026-02-23 20:09:21.341407 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:09:21.341425 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:09:21.341443 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:09:21.341461 | orchestrator |
2026-02-23 20:09:21.341476 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-23 20:09:21.341487 | orchestrator | Monday 23 February 2026 20:08:44 +0000 (0:00:00.102) 0:00:03.884 *******
2026-02-23 20:09:21.341497 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:09:21.341508 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:09:21.341518 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:09:21.341529 | orchestrator |
2026-02-23 20:09:21.341540 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-23 20:09:21.341550 | orchestrator | Monday 23 February 2026 20:08:45 +0000 (0:00:01.041) 0:00:04.926 *******
2026-02-23 20:09:21.341561 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:21.341571 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:21.341582 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:21.341593 | orchestrator |
2026-02-23 20:09:21.341603 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-23 20:09:21.341614 | orchestrator | Monday 23 February 2026 20:08:46 +0000 (0:00:00.431) 0:00:05.358 *******
2026-02-23 20:09:21.341625 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:09:21.341635 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:09:21.341646 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:09:21.341657 | orchestrator |
2026-02-23 20:09:21.341679 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-23 20:09:21.341690 | orchestrator | Monday 23 February 2026 20:08:47 +0000 (0:00:01.045) 0:00:06.403 *******
2026-02-23 20:09:21.341701 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:09:21.341711 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:09:21.341722 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:09:21.341732 | orchestrator |
2026-02-23 20:09:21.341743 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-23 20:09:21.341754 | orchestrator | Monday 23 February 2026 20:09:03 +0000 (0:00:16.458) 0:00:22.862 *******
2026-02-23 20:09:21.341764 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:09:21.341775 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:09:21.341786 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:09:21.341796 | orchestrator |
2026-02-23 20:09:21.341807 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-23 20:09:21.341839 | orchestrator | Monday 23 February 2026 20:09:03 +0000 (0:00:00.082) 0:00:22.944 *******
2026-02-23 20:09:21.341850 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:09:21.341861 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:09:21.341872 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:09:21.341882 | orchestrator |
2026-02-23 20:09:21.341893 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-23 20:09:21.341903 | orchestrator | Monday 23 February 2026 20:09:12 +0000 (0:00:08.296) 0:00:31.241 *******
2026-02-23 20:09:21.341914 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:21.341925 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:21.341935 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:21.341946 | orchestrator |
2026-02-23 20:09:21.341957 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-23 20:09:21.341967 | orchestrator | Monday 23 February 2026 20:09:12 +0000 (0:00:00.449) 0:00:31.690 *******
2026-02-23 20:09:21.341978 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-23 20:09:21.341989 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-23 20:09:21.342000 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-23 20:09:21.342011 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-23 20:09:21.342091 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-23 20:09:21.342104 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-23 20:09:21.342114 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-23 20:09:21.342125 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-23 20:09:21.342136 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-23 20:09:21.342147 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-23 20:09:21.342157 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-23 20:09:21.342168 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-23 20:09:21.342178 | orchestrator |
2026-02-23 20:09:21.342223 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-23 20:09:21.342244 | orchestrator | Monday 23 February 2026 20:09:16 +0000 (0:00:03.575) 0:00:35.266 *******
2026-02-23 20:09:21.342264 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:21.342284 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:21.342303 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:21.342315 | orchestrator |
2026-02-23 20:09:21.342325 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-23 20:09:21.342336 | orchestrator |
2026-02-23 20:09:21.342346 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-23 20:09:21.342401 | orchestrator | Monday 23 February 2026 20:09:17 +0000 (0:00:01.394) 0:00:36.660 *******
2026-02-23 20:09:21.342413 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:09:21.342433 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:09:21.342444 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:09:21.342454 | orchestrator | ok: [testbed-manager]
2026-02-23 20:09:21.342465 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:21.342475 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:21.342486 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:21.342496 | orchestrator |
2026-02-23 20:09:21.342507 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:09:21.342519 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:09:21.342530 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:09:21.342543 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:09:21.342554 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:09:21.342565 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-23 20:09:21.342576 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-23 20:09:21.342587 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-23 20:09:21.342598 | orchestrator |
2026-02-23 20:09:21.342608 | orchestrator |
2026-02-23 20:09:21.342619 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:09:21.342630 | orchestrator | Monday 23 February 2026 20:09:21 +0000 (0:00:03.755) 0:00:40.416 *******
2026-02-23 20:09:21.342641 | orchestrator | ===============================================================================
2026-02-23 20:09:21.342651 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.46s
2026-02-23 20:09:21.342662 | orchestrator | Install required packages (Debian) -------------------------------------- 8.30s
2026-02-23 20:09:21.342673 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.76s
2026-02-23 20:09:21.342683 | orchestrator | Copy fact files --------------------------------------------------------- 3.58s
2026-02-23 20:09:21.342694 | orchestrator | Create custom facts directory ------------------------------------------- 1.46s
2026-02-23 20:09:21.342705 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.39s
2026-02-23 20:09:21.342725 | orchestrator | Copy fact file ---------------------------------------------------------- 1.29s
2026-02-23 20:09:21.469653 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2026-02-23 20:09:21.469747 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2026-02-23 20:09:21.469762 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2026-02-23 20:09:21.469773 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2026-02-23 20:09:21.469784 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.43s
2026-02-23 20:09:21.469795 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2026-02-23 20:09:21.469806 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2026-02-23 20:09:21.469817 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s
2026-02-23 20:09:21.469829 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2026-02-23 20:09:21.469857 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-02-23 20:09:21.469869 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-02-23 20:09:21.703824 | orchestrator | + osism apply bootstrap
2026-02-23 20:09:33.720029 | orchestrator | 2026-02-23 20:09:33 | INFO  | Prepare task for execution of bootstrap.
2026-02-23 20:09:33.825318 | orchestrator | 2026-02-23 20:09:33 | INFO  | Task 4b5cf347-4868-44ff-9e7a-788f9c6841f6 (bootstrap) was prepared for execution.
2026-02-23 20:09:33.825413 | orchestrator | 2026-02-23 20:09:33 | INFO  | It takes a moment until task 4b5cf347-4868-44ff-9e7a-788f9c6841f6 (bootstrap) has been started and output is visible here.
2026-02-23 20:09:49.579146 | orchestrator |
2026-02-23 20:09:49.579263 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-23 20:09:49.579281 | orchestrator |
2026-02-23 20:09:49.579343 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-23 20:09:49.579357 | orchestrator | Monday 23 February 2026 20:09:38 +0000 (0:00:00.156) 0:00:00.156 *******
2026-02-23 20:09:49.579368 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:49.579380 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:49.579390 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:49.579401 | orchestrator | ok: [testbed-manager]
2026-02-23 20:09:49.579412 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:09:49.579423 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:09:49.579434 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:09:49.579445 | orchestrator |
2026-02-23 20:09:49.579456 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-23 20:09:49.579466 | orchestrator |
2026-02-23 20:09:49.579478 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-23 20:09:49.579488 | orchestrator | Monday 23 February 2026 20:09:38 +0000 (0:00:00.243) 0:00:00.400 *******
2026-02-23 20:09:49.579499 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:09:49.579511 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:09:49.579522 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:09:49.579532 | orchestrator | ok: [testbed-manager]
2026-02-23 20:09:49.579543 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:49.579554 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:49.579564 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:49.579575 | orchestrator |
2026-02-23 20:09:49.579586 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-23 20:09:49.579596 | orchestrator |
2026-02-23 20:09:49.579607 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-23 20:09:49.579618 | orchestrator | Monday 23 February 2026 20:09:42 +0000 (0:00:03.751) 0:00:04.152 *******
2026-02-23 20:09:49.579630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:09:49.579641 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-23 20:09:49.579652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-23 20:09:49.579662 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-23 20:09:49.579673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-23 20:09:49.579684 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-23 20:09:49.579697 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-23 20:09:49.579709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-23 20:09:49.579721 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-23 20:09:49.579733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-23 20:09:49.579745 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-23 20:09:49.579758 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-23 20:09:49.579770 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-23 20:09:49.579781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-23 20:09:49.579794 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-23 20:09:49.579806 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-23 20:09:49.579842 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-23 20:09:49.579855 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-23 20:09:49.579867 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-23 20:09:49.579878 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-23 20:09:49.579888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-23 20:09:49.579899 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-23 20:09:49.579909 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-23 20:09:49.579920 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-23 20:09:49.579930 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:09:49.579941 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:09:49.579951 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-23 20:09:49.579962 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-23 20:09:49.579972 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:09:49.579983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-23 20:09:49.579993 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-23 20:09:49.580004 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-23 20:09:49.580014 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-23 20:09:49.580025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-23 20:09:49.580035 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-23 20:09:49.580046 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-23 20:09:49.580056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-23 20:09:49.580067 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-23 20:09:49.580078 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-23 20:09:49.580088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-23 20:09:49.580098 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-23 20:09:49.580109 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-23 20:09:49.580120 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:09:49.580130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-23 20:09:49.580141 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-23 20:09:49.580151 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:09:49.580162 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-23 20:09:49.580190 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-23 20:09:49.580202 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-23 20:09:49.580212 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-23 20:09:49.580223 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-23 20:09:49.580233 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:09:49.580244 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-23 20:09:49.580254 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-23 20:09:49.580265 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-23 20:09:49.580275 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:09:49.580286 | orchestrator |
2026-02-23 20:09:49.580314 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-23 20:09:49.580326 | orchestrator |
2026-02-23 20:09:49.580336 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-23 20:09:49.580347 | orchestrator | Monday 23 February 2026 20:09:42 +0000 (0:00:00.472) 0:00:04.624 *******
2026-02-23 20:09:49.580358 | orchestrator | ok: [testbed-manager]
2026-02-23 20:09:49.580368 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:49.580387 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:09:49.580398 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:09:49.580408 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:49.580419 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:09:49.580429 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:49.580440 | orchestrator |
2026-02-23 20:09:49.580450 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-23 20:09:49.580461 | orchestrator | Monday 23 February 2026 20:09:43 +0000 (0:00:01.201) 0:00:05.825 *******
2026-02-23 20:09:49.580472 | orchestrator | ok: [testbed-manager]
2026-02-23 20:09:49.580482 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:09:49.580492 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:09:49.580503 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:09:49.580513 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:09:49.580523 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:09:49.580534 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:09:49.580544 | orchestrator |
2026-02-23 20:09:49.580555 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-23 20:09:49.580566 | orchestrator | Monday 23 February 2026 20:09:44 +0000 (0:00:01.221) 0:00:07.046 *******
2026-02-23 20:09:49.580578 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:09:49.580591 | orchestrator | 2026-02-23 20:09:49.580602 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-23 20:09:49.580612 | orchestrator | Monday 23 February 2026 20:09:45 +0000 (0:00:00.256) 0:00:07.303 ******* 2026-02-23 20:09:49.580623 | orchestrator | changed: [testbed-manager] 2026-02-23 20:09:49.580634 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:09:49.580644 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:09:49.580655 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:09:49.580665 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:09:49.580675 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:09:49.580686 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:09:49.580696 | orchestrator | 2026-02-23 20:09:49.580707 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-23 20:09:49.580717 | orchestrator | Monday 23 February 2026 20:09:47 +0000 (0:00:02.084) 0:00:09.387 ******* 2026-02-23 20:09:49.580728 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:09:49.580740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:09:49.580752 | orchestrator | 2026-02-23 20:09:49.580762 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-23 20:09:49.580792 | orchestrator | Monday 23 February 2026 20:09:47 +0000 (0:00:00.238) 0:00:09.626 ******* 2026-02-23 20:09:49.580804 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:09:49.580814 | 
orchestrator | changed: [testbed-node-1] 2026-02-23 20:09:49.580825 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:09:49.580835 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:09:49.580846 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:09:49.580856 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:09:49.580867 | orchestrator | 2026-02-23 20:09:49.580877 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-02-23 20:09:49.580888 | orchestrator | Monday 23 February 2026 20:09:48 +0000 (0:00:01.056) 0:00:10.682 ******* 2026-02-23 20:09:49.580898 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:09:49.580909 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:09:49.580919 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:09:49.580930 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:09:49.580940 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:09:49.580951 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:09:49.580968 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:09:49.580978 | orchestrator | 2026-02-23 20:09:49.580989 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-23 20:09:49.581005 | orchestrator | Monday 23 February 2026 20:09:49 +0000 (0:00:00.521) 0:00:11.204 ******* 2026-02-23 20:09:49.581016 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:09:49.581027 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:09:49.581037 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:09:49.581047 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:09:49.581058 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:09:49.581068 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:09:49.581079 | orchestrator | ok: [testbed-manager] 2026-02-23 20:09:49.581089 | orchestrator | 2026-02-23 20:09:49.581100 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-02-23 20:09:49.581111 | orchestrator | Monday 23 February 2026 20:09:49 +0000 (0:00:00.406) 0:00:11.610 ******* 2026-02-23 20:09:49.581122 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:09:49.581133 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:09:49.581151 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:10:02.547838 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:10:02.547939 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:10:02.547952 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:10:02.547962 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:10:02.547972 | orchestrator | 2026-02-23 20:10:02.547982 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-23 20:10:02.547993 | orchestrator | Monday 23 February 2026 20:09:49 +0000 (0:00:00.165) 0:00:11.776 ******* 2026-02-23 20:10:02.548004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:10:02.548030 | orchestrator | 2026-02-23 20:10:02.548040 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-23 20:10:02.548050 | orchestrator | Monday 23 February 2026 20:09:49 +0000 (0:00:00.240) 0:00:12.016 ******* 2026-02-23 20:10:02.548060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:10:02.548069 | orchestrator | 2026-02-23 20:10:02.548079 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-23 
20:10:02.548088 | orchestrator | Monday 23 February 2026 20:09:50 +0000 (0:00:00.333) 0:00:12.350 ******* 2026-02-23 20:10:02.548097 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.548107 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:02.548116 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:02.548126 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:02.548135 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:02.548144 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:02.548153 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:02.548162 | orchestrator | 2026-02-23 20:10:02.548171 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-23 20:10:02.548180 | orchestrator | Monday 23 February 2026 20:09:51 +0000 (0:00:01.471) 0:00:13.822 ******* 2026-02-23 20:10:02.548190 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:10:02.548200 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:10:02.548209 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:10:02.548218 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:10:02.548227 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:10:02.548236 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:10:02.548245 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:10:02.548254 | orchestrator | 2026-02-23 20:10:02.548263 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-23 20:10:02.548294 | orchestrator | Monday 23 February 2026 20:09:51 +0000 (0:00:00.268) 0:00:14.091 ******* 2026-02-23 20:10:02.548304 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:02.548314 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.548323 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:02.548332 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:02.548341 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:02.548373 | orchestrator 
| ok: [testbed-node-0] 2026-02-23 20:10:02.548383 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:02.548392 | orchestrator | 2026-02-23 20:10:02.548403 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-23 20:10:02.548413 | orchestrator | Monday 23 February 2026 20:09:53 +0000 (0:00:01.449) 0:00:15.540 ******* 2026-02-23 20:10:02.548423 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:10:02.548433 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:10:02.548443 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:10:02.548453 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:10:02.548462 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:10:02.548472 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:10:02.548483 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:10:02.548493 | orchestrator | 2026-02-23 20:10:02.548503 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-23 20:10:02.548514 | orchestrator | Monday 23 February 2026 20:09:53 +0000 (0:00:00.213) 0:00:15.754 ******* 2026-02-23 20:10:02.548524 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:10:02.548534 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.548543 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:10:02.548553 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:10:02.548563 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:10:02.548573 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:10:02.548582 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:10:02.548592 | orchestrator | 2026-02-23 20:10:02.548602 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-23 20:10:02.548612 | orchestrator | Monday 23 February 2026 20:09:54 +0000 (0:00:00.537) 0:00:16.291 ******* 2026-02-23 20:10:02.548622 | orchestrator | ok: 
[testbed-manager] 2026-02-23 20:10:02.548632 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:10:02.548642 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:10:02.548652 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:10:02.548662 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:10:02.548672 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:10:02.548681 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:10:02.548691 | orchestrator | 2026-02-23 20:10:02.548709 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-23 20:10:02.548719 | orchestrator | Monday 23 February 2026 20:09:55 +0000 (0:00:01.119) 0:00:17.410 ******* 2026-02-23 20:10:02.548730 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.548741 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:02.548750 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:02.548758 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:02.548767 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:02.548846 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:02.548856 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:02.548889 | orchestrator | 2026-02-23 20:10:02.548900 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-23 20:10:02.548909 | orchestrator | Monday 23 February 2026 20:09:56 +0000 (0:00:01.166) 0:00:18.576 ******* 2026-02-23 20:10:02.548935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:10:02.548944 | orchestrator | 2026-02-23 20:10:02.548953 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-23 20:10:02.548962 | orchestrator | Monday 23 February 2026 
20:09:56 +0000 (0:00:00.289) 0:00:18.866 ******* 2026-02-23 20:10:02.548979 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:10:02.548988 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:10:02.548997 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:10:02.549005 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:10:02.549014 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:10:02.549023 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:10:02.549031 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:10:02.549040 | orchestrator | 2026-02-23 20:10:02.549048 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-23 20:10:02.549057 | orchestrator | Monday 23 February 2026 20:09:57 +0000 (0:00:01.245) 0:00:20.111 ******* 2026-02-23 20:10:02.549065 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:02.549074 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:02.549082 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:02.549091 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.549099 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:02.549108 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:02.549116 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:02.549125 | orchestrator | 2026-02-23 20:10:02.549133 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-23 20:10:02.549142 | orchestrator | Monday 23 February 2026 20:09:58 +0000 (0:00:00.265) 0:00:20.377 ******* 2026-02-23 20:10:02.549151 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:02.549159 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:02.549168 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:02.549177 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.549185 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:02.549193 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:02.549202 | 
orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:02.549210 | orchestrator | 2026-02-23 20:10:02.549219 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-23 20:10:02.549228 | orchestrator | Monday 23 February 2026 20:09:58 +0000 (0:00:00.221) 0:00:20.599 ******* 2026-02-23 20:10:02.549236 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:02.549244 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:02.549253 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:02.549261 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.549270 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:02.549278 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:02.549286 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:02.549295 | orchestrator | 2026-02-23 20:10:02.549304 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-23 20:10:02.549312 | orchestrator | Monday 23 February 2026 20:09:58 +0000 (0:00:00.221) 0:00:20.820 ******* 2026-02-23 20:10:02.549322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:10:02.549332 | orchestrator | 2026-02-23 20:10:02.549340 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-23 20:10:02.549426 | orchestrator | Monday 23 February 2026 20:09:58 +0000 (0:00:00.267) 0:00:21.087 ******* 2026-02-23 20:10:02.549435 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:02.549444 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:02.549452 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:02.549461 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.549469 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:02.549478 | orchestrator | ok: 
[testbed-node-2] 2026-02-23 20:10:02.549486 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:02.549495 | orchestrator | 2026-02-23 20:10:02.549503 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-23 20:10:02.549512 | orchestrator | Monday 23 February 2026 20:09:59 +0000 (0:00:00.535) 0:00:21.623 ******* 2026-02-23 20:10:02.549521 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:10:02.549529 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:10:02.549543 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:10:02.549552 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:10:02.549561 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:10:02.549569 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:10:02.549578 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:10:02.549586 | orchestrator | 2026-02-23 20:10:02.549595 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-23 20:10:02.549603 | orchestrator | Monday 23 February 2026 20:09:59 +0000 (0:00:00.214) 0:00:21.837 ******* 2026-02-23 20:10:02.549612 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.549620 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:02.549629 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:02.549637 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:02.549646 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:10:02.549654 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:10:02.549663 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:10:02.549671 | orchestrator | 2026-02-23 20:10:02.549680 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-23 20:10:02.549689 | orchestrator | Monday 23 February 2026 20:10:00 +0000 (0:00:01.141) 0:00:22.979 ******* 2026-02-23 20:10:02.549698 | orchestrator | ok: [testbed-node-3] 2026-02-23 
20:10:02.549706 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.549715 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:02.549723 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:02.549732 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:02.549740 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:02.549749 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:02.549757 | orchestrator | 2026-02-23 20:10:02.549766 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-23 20:10:02.549775 | orchestrator | Monday 23 February 2026 20:10:01 +0000 (0:00:00.625) 0:00:23.604 ******* 2026-02-23 20:10:02.549784 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:02.549792 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:02.549801 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:02.549809 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:02.549825 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:10:43.117585 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:10:43.117716 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:10:43.117741 | orchestrator | 2026-02-23 20:10:43.117761 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-23 20:10:43.117783 | orchestrator | Monday 23 February 2026 20:10:02 +0000 (0:00:01.167) 0:00:24.771 ******* 2026-02-23 20:10:43.117802 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:43.117824 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:43.117842 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:43.117857 | orchestrator | changed: [testbed-manager] 2026-02-23 20:10:43.117868 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:10:43.117879 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:10:43.117890 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:10:43.117901 | orchestrator | 2026-02-23 20:10:43.117912 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-02-23 20:10:43.117923 | orchestrator | Monday 23 February 2026 20:10:20 +0000 (0:00:17.409) 0:00:42.180 ******* 2026-02-23 20:10:43.117934 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:43.117946 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:43.117956 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:43.117967 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:43.117978 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:43.117989 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:43.117999 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:43.118010 | orchestrator | 2026-02-23 20:10:43.118073 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-02-23 20:10:43.118086 | orchestrator | Monday 23 February 2026 20:10:20 +0000 (0:00:00.210) 0:00:42.391 ******* 2026-02-23 20:10:43.118098 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:43.118171 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:43.118184 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:43.118197 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:43.118210 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:43.118222 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:43.118234 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:43.118246 | orchestrator | 2026-02-23 20:10:43.118259 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-02-23 20:10:43.118278 | orchestrator | Monday 23 February 2026 20:10:20 +0000 (0:00:00.257) 0:00:42.648 ******* 2026-02-23 20:10:43.118297 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:43.118315 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:43.118332 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:43.118349 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:43.118366 | orchestrator | ok: 
[testbed-node-0] 2026-02-23 20:10:43.118384 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:43.118402 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:43.118420 | orchestrator | 2026-02-23 20:10:43.118439 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-02-23 20:10:43.118458 | orchestrator | Monday 23 February 2026 20:10:20 +0000 (0:00:00.237) 0:00:42.886 ******* 2026-02-23 20:10:43.118480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:10:43.118557 | orchestrator | 2026-02-23 20:10:43.118611 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-23 20:10:43.118629 | orchestrator | Monday 23 February 2026 20:10:21 +0000 (0:00:00.284) 0:00:43.171 ******* 2026-02-23 20:10:43.118645 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:43.118662 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:43.118678 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:43.118697 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:43.118713 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:43.118729 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:43.118746 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:43.118764 | orchestrator | 2026-02-23 20:10:43.118781 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-23 20:10:43.118798 | orchestrator | Monday 23 February 2026 20:10:22 +0000 (0:00:01.914) 0:00:45.085 ******* 2026-02-23 20:10:43.118816 | orchestrator | changed: [testbed-manager] 2026-02-23 20:10:43.118835 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:10:43.118853 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:10:43.118868 | orchestrator | 
changed: [testbed-node-5] 2026-02-23 20:10:43.118884 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:10:43.118900 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:10:43.118915 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:10:43.118930 | orchestrator | 2026-02-23 20:10:43.118948 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-23 20:10:43.118967 | orchestrator | Monday 23 February 2026 20:10:23 +0000 (0:00:01.061) 0:00:46.146 ******* 2026-02-23 20:10:43.118985 | orchestrator | ok: [testbed-manager] 2026-02-23 20:10:43.119002 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:10:43.119020 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:10:43.119038 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:10:43.119056 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:10:43.119073 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:10:43.119090 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:10:43.119106 | orchestrator | 2026-02-23 20:10:43.119124 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-23 20:10:43.119142 | orchestrator | Monday 23 February 2026 20:10:24 +0000 (0:00:00.857) 0:00:47.003 ******* 2026-02-23 20:10:43.119168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:10:43.119204 | orchestrator | 2026-02-23 20:10:43.119224 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-23 20:10:43.119243 | orchestrator | Monday 23 February 2026 20:10:25 +0000 (0:00:00.267) 0:00:47.271 ******* 2026-02-23 20:10:43.119263 | orchestrator | changed: [testbed-manager] 2026-02-23 20:10:43.119280 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:10:43.119296 | 
orchestrator | changed: [testbed-node-4]
2026-02-23 20:10:43.119313 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:10:43.119330 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:10:43.119347 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:10:43.119363 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:10:43.119380 | orchestrator |
2026-02-23 20:10:43.119426 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-02-23 20:10:43.119446 | orchestrator | Monday 23 February 2026 20:10:26 +0000 (0:00:01.002) 0:00:48.273 *******
2026-02-23 20:10:43.119464 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:10:43.119481 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:10:43.119527 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:10:43.119545 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:10:43.119563 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:10:43.119581 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:10:43.119599 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:10:43.119616 | orchestrator |
2026-02-23 20:10:43.119633 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-02-23 20:10:43.119651 | orchestrator | Monday 23 February 2026 20:10:26 +0000 (0:00:00.245) 0:00:48.518 *******
2026-02-23 20:10:43.119669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:10:43.119687 | orchestrator |
2026-02-23 20:10:43.119705 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-02-23 20:10:43.119722 | orchestrator | Monday 23 February 2026 20:10:26 +0000 (0:00:00.310) 0:00:48.829 *******
2026-02-23 20:10:43.119739 | orchestrator | ok: [testbed-manager]
2026-02-23 20:10:43.119758 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:10:43.119775 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:10:43.119794 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:10:43.119811 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:10:43.119828 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:10:43.119848 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:10:43.119866 | orchestrator |
2026-02-23 20:10:43.119884 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-02-23 20:10:43.119902 | orchestrator | Monday 23 February 2026 20:10:28 +0000 (0:00:01.832) 0:00:50.662 *******
2026-02-23 20:10:43.119914 | orchestrator | changed: [testbed-manager]
2026-02-23 20:10:43.119924 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:10:43.119935 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:10:43.119945 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:10:43.119956 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:10:43.119991 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:10:43.120002 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:10:43.120012 | orchestrator |
2026-02-23 20:10:43.120023 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-02-23 20:10:43.120034 | orchestrator | Monday 23 February 2026 20:10:29 +0000 (0:00:01.071) 0:00:51.734 *******
2026-02-23 20:10:43.120044 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:10:43.120055 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:10:43.120071 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:10:43.120089 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:10:43.120106 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:10:43.120124 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:10:43.120158 | orchestrator | changed: [testbed-manager]
2026-02-23 20:10:43.120176 | orchestrator |
2026-02-23 20:10:43.120194 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-02-23 20:10:43.120235 | orchestrator | Monday 23 February 2026 20:10:40 +0000 (0:00:11.380) 0:01:03.115 *******
2026-02-23 20:10:43.120253 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:10:43.120271 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:10:43.120289 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:10:43.120307 | orchestrator | ok: [testbed-manager]
2026-02-23 20:10:43.120324 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:10:43.120341 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:10:43.120357 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:10:43.120375 | orchestrator |
2026-02-23 20:10:43.120393 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-02-23 20:10:43.120411 | orchestrator | Monday 23 February 2026 20:10:41 +0000 (0:00:00.680) 0:01:03.795 *******
2026-02-23 20:10:43.120429 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:10:43.120448 | orchestrator | ok: [testbed-manager]
2026-02-23 20:10:43.120466 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:10:43.120484 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:10:43.120563 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:10:43.120582 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:10:43.120599 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:10:43.120618 | orchestrator |
2026-02-23 20:10:43.120635 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-02-23 20:10:43.120653 | orchestrator | Monday 23 February 2026 20:10:42 +0000 (0:00:00.846) 0:01:04.642 *******
2026-02-23 20:10:43.120671 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:10:43.120688 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:10:43.120707 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:10:43.120724 | orchestrator | ok: [testbed-manager]
2026-02-23 20:10:43.120742 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:10:43.120759 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:10:43.120777 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:10:43.120794 | orchestrator |
2026-02-23 20:10:43.120812 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-02-23 20:10:43.120833 | orchestrator | Monday 23 February 2026 20:10:42 +0000 (0:00:00.180) 0:01:04.822 *******
2026-02-23 20:10:43.120850 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:10:43.120869 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:10:43.120882 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:10:43.120903 | orchestrator | ok: [testbed-manager]
2026-02-23 20:10:43.120914 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:10:43.120925 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:10:43.120935 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:10:43.120946 | orchestrator |
2026-02-23 20:10:43.120957 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-02-23 20:10:43.120967 | orchestrator | Monday 23 February 2026 20:10:42 +0000 (0:00:00.182) 0:01:05.005 *******
2026-02-23 20:10:43.120979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:10:43.120991 | orchestrator |
2026-02-23 20:10:43.121018 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-02-23 20:13:06.447723 | orchestrator | Monday 23 February 2026 20:10:43 +0000 (0:00:00.255) 0:01:05.260 *******
2026-02-23 20:13:06.447834 | orchestrator | ok: [testbed-manager]
2026-02-23 20:13:06.447852 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:13:06.447867 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:13:06.447878 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:13:06.447890 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:13:06.447902 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:13:06.447914 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:13:06.447926 | orchestrator |
2026-02-23 20:13:06.447940 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-02-23 20:13:06.448041 | orchestrator | Monday 23 February 2026 20:10:44 +0000 (0:00:01.750) 0:01:07.011 *******
2026-02-23 20:13:06.448052 | orchestrator | changed: [testbed-manager]
2026-02-23 20:13:06.448065 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:13:06.448076 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:13:06.448088 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:13:06.448100 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:13:06.448112 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:13:06.448125 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:13:06.448137 | orchestrator |
2026-02-23 20:13:06.448149 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-23 20:13:06.448159 | orchestrator | Monday 23 February 2026 20:10:45 +0000 (0:00:00.535) 0:01:07.547 *******
2026-02-23 20:13:06.448166 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:13:06.448173 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:13:06.448181 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:13:06.448188 | orchestrator | ok: [testbed-manager]
2026-02-23 20:13:06.448195 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:13:06.448206 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:13:06.448217 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:13:06.448230 | orchestrator |
2026-02-23 20:13:06.448242 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-23 20:13:06.448253 | orchestrator | Monday 23 February 2026 20:10:45 +0000 (0:00:00.171) 0:01:07.718 *******
2026-02-23 20:13:06.448265 | orchestrator | ok: [testbed-manager]
2026-02-23 20:13:06.448276 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:13:06.448289 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:13:06.448301 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:13:06.448312 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:13:06.448325 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:13:06.448336 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:13:06.448349 | orchestrator |
2026-02-23 20:13:06.448362 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-23 20:13:06.448374 | orchestrator | Monday 23 February 2026 20:10:46 +0000 (0:00:01.183) 0:01:08.902 *******
2026-02-23 20:13:06.448387 | orchestrator | changed: [testbed-manager]
2026-02-23 20:13:06.448400 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:13:06.448412 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:13:06.448424 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:13:06.448436 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:13:06.448449 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:13:06.448461 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:13:06.448473 | orchestrator |
2026-02-23 20:13:06.448481 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-23 20:13:06.448488 | orchestrator | Monday 23 February 2026 20:10:48 +0000 (0:00:01.923) 0:01:10.825 *******
2026-02-23 20:13:06.448495 | orchestrator | ok: [testbed-manager]
2026-02-23 20:13:06.448503 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:13:06.448510 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:13:06.448517 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:13:06.448524 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:13:06.448531 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:13:06.448538 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:13:06.448545 | orchestrator |
2026-02-23 20:13:06.448553 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-23 20:13:06.448560 | orchestrator | Monday 23 February 2026 20:10:51 +0000 (0:00:02.875) 0:01:13.701 *******
2026-02-23 20:13:06.448567 | orchestrator | ok: [testbed-manager]
2026-02-23 20:13:06.448574 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:13:06.448581 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:13:06.448588 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:13:06.448594 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:13:06.448601 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:13:06.448608 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:13:06.448615 | orchestrator |
2026-02-23 20:13:06.448623 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-23 20:13:06.448639 | orchestrator | Monday 23 February 2026 20:11:27 +0000 (0:00:35.685) 0:01:49.387 *******
2026-02-23 20:13:06.448646 | orchestrator | changed: [testbed-manager]
2026-02-23 20:13:06.448654 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:13:06.448661 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:13:06.448667 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:13:06.448674 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:13:06.448681 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:13:06.448688 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:13:06.448695 | orchestrator |
2026-02-23 20:13:06.448702 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-23 20:13:06.448709 | orchestrator | Monday 23 February 2026 20:12:52 +0000 (0:01:24.942) 0:03:14.329 *******
2026-02-23 20:13:06.448716 | orchestrator | ok: [testbed-manager]
2026-02-23 20:13:06.448723 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:13:06.448731 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:13:06.448738 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:13:06.448745 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:13:06.448753 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:13:06.448760 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:13:06.448767 | orchestrator |
2026-02-23 20:13:06.448774 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-23 20:13:06.448782 | orchestrator | Monday 23 February 2026 20:12:54 +0000 (0:00:02.315) 0:03:16.644 *******
2026-02-23 20:13:06.448789 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:13:06.448796 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:13:06.448803 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:13:06.448810 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:13:06.448816 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:13:06.448823 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:13:06.448830 | orchestrator | changed: [testbed-manager]
2026-02-23 20:13:06.448841 | orchestrator |
2026-02-23 20:13:06.448853 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-23 20:13:06.448864 | orchestrator | Monday 23 February 2026 20:13:05 +0000 (0:00:10.755) 0:03:27.400 *******
2026-02-23 20:13:06.448918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-23 20:13:06.448939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-23 20:13:06.449001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-23 20:13:06.449017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-23 20:13:06.449033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-23 20:13:06.449041 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-23 20:13:06.449051 | orchestrator |
2026-02-23 20:13:06.449059 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-23 20:13:06.449067 | orchestrator | Monday 23 February 2026 20:13:05 +0000 (0:00:00.377) 0:03:27.777 *******
2026-02-23 20:13:06.449074 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-23 20:13:06.449082 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:13:06.449092 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-23 20:13:06.449104 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-23 20:13:06.449116 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:13:06.449140 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-23 20:13:06.449153 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:13:06.449165 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:13:06.449175 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-23 20:13:06.449182 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-23 20:13:06.449190 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-23 20:13:06.449197 | orchestrator |
2026-02-23 20:13:06.449204 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-23 20:13:06.449215 | orchestrator | Monday 23 February 2026 20:13:06 +0000 (0:00:00.749) 0:03:28.526 *******
2026-02-23 20:13:06.449223 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-23 20:13:06.449232 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-23 20:13:06.449239 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-23 20:13:06.449246 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-23 20:13:06.449253 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-23 20:13:06.449268 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-23 20:13:13.272454 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-23 20:13:13.272568 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-23 20:13:13.272585 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-23 20:13:13.272599 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-23 20:13:13.272611 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-23 20:13:13.272622 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-23 20:13:13.272633 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-23 20:13:13.272666 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-23 20:13:13.272677 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-23 20:13:13.272688 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-23 20:13:13.272699 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-23 20:13:13.272710 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-23 20:13:13.272721 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-23 20:13:13.272732 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-23 20:13:13.272742 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-23 20:13:13.272753 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-23 20:13:13.272764 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-23 20:13:13.272775 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-23 20:13:13.272785 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-23 20:13:13.272796 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-23 20:13:13.272806 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-23 20:13:13.272818 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:13:13.272830 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-23 20:13:13.272840 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-23 20:13:13.272851 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-23 20:13:13.272862 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-23 20:13:13.272872 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-23 20:13:13.272883 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:13:13.272894 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-23 20:13:13.272904 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-23 20:13:13.272916 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-23 20:13:13.272929 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-23 20:13:13.272941 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-23 20:13:13.272953 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-23 20:13:13.272965 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-23 20:13:13.273085 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-23 20:13:13.273098 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:13:13.273112 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:13:13.273125 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-23 20:13:13.273220 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-23 20:13:13.273232 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-23 20:13:13.273254 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-23 20:13:13.273266 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-23 20:13:13.273298 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-23 20:13:13.273310 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-23 20:13:13.273320 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-23 20:13:13.273331 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-23 20:13:13.273342 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-23 20:13:13.273353 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-23 20:13:13.273363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-23 20:13:13.273374 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-23 20:13:13.273385 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-23 20:13:13.273395 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-23 20:13:13.273406 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-23 20:13:13.273417 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-23 20:13:13.273427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-23 20:13:13.273438 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-23 20:13:13.273449 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-23 20:13:13.273459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-23 20:13:13.273470 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-23 20:13:13.273480 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-23 20:13:13.273491 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-23 20:13:13.273502 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-23 20:13:13.273512 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-23 20:13:13.273523 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-23 20:13:13.273534 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-23 20:13:13.273544 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-23 20:13:13.273555 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-23 20:13:13.273566 | orchestrator |
2026-02-23 20:13:13.273577 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-23 20:13:13.273588 | orchestrator | Monday 23 February 2026 20:13:12 +0000 (0:00:05.901) 0:03:34.428 *******
2026-02-23 20:13:13.273599 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-23 20:13:13.273610 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-23 20:13:13.273620 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-23 20:13:13.273631 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-23 20:13:13.273648 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-23 20:13:13.273659 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-23 20:13:13.273670 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-23 20:13:13.273681 | orchestrator |
2026-02-23 20:13:13.273692 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-23 20:13:13.273702 | orchestrator | Monday 23 February 2026 20:13:12 +0000 (0:00:00.572) 0:03:35.000 *******
2026-02-23 20:13:13.273713 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:13.273730 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:13.273742 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:13:13.273753 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:13.273763 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:13:13.273774 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:13:13.273785 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:13.273796 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:13:13.273806 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:13.273817 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:13.273841 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:27.237447 | orchestrator |
2026-02-23 20:13:27.237543 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-23 20:13:27.237564 | orchestrator | Monday 23 February 2026 20:13:13 +0000 (0:00:00.440) 0:03:35.441 *******
2026-02-23 20:13:27.237579 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:27.237586 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:27.237593 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:13:27.237600 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:27.237605 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:13:27.237611 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:27.237617 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:13:27.237622 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:13:27.237628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:27.237634 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:27.237639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-23 20:13:27.237645 | orchestrator |
2026-02-23 20:13:27.237651 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-23 20:13:27.237656 | orchestrator | Monday 23 February 2026 20:13:13 +0000 (0:00:00.562) 0:03:36.004 *******
2026-02-23 20:13:27.237662 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-23 20:13:27.237667 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:13:27.237673 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-23 20:13:27.237678 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:13:27.237684 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-23 20:13:27.237707 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:13:27.237713 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-23 20:13:27.237718 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:13:27.237724 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-23 20:13:27.237729 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-23 20:13:27.237735 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-23 20:13:27.237740 | orchestrator |
2026-02-23 20:13:27.237746 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-02-23 20:13:27.237751 | orchestrator | Monday 23 February 2026 20:13:15 +0000 (0:00:01.514) 0:03:37.518 *******
2026-02-23 20:13:27.237757 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:13:27.237762 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:13:27.237768 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:13:27.237773 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:13:27.237779 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:13:27.237784 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:13:27.237789 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:13:27.237795 | orchestrator |
2026-02-23 20:13:27.237800 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-02-23 20:13:27.237806 | orchestrator | Monday 23 February 2026 20:13:15 +0000 (0:00:00.286) 0:03:37.805 *******
2026-02-23 20:13:27.237811 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:13:27.237817 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:13:27.237823 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:13:27.237828 | orchestrator | ok: [testbed-manager]
2026-02-23 20:13:27.237834 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:13:27.237839 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:13:27.237845 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:13:27.237850 | orchestrator |
2026-02-23 20:13:27.237856 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-23 20:13:27.237861 | orchestrator | Monday 23 February 2026 20:13:21 +0000 (0:00:05.699) 0:03:43.504 *******
2026-02-23 20:13:27.237867 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-23 20:13:27.237872 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:13:27.237878 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-23 20:13:27.237883 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:13:27.237889 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-23 20:13:27.237895 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:13:27.237901 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-23 20:13:27.237911 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:13:27.237918 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-23 20:13:27.237933 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-23 20:13:27.237943 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:13:27.237951 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:13:27.237960 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-23 20:13:27.237968 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:13:27.237976 | orchestrator |
2026-02-23 20:13:27.237986 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-02-23 20:13:27.237994 | orchestrator | Monday 23 February 2026 20:13:21 +0000 (0:00:00.319) 0:03:43.823 *******
2026-02-23 20:13:27.238003 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-02-23 20:13:27.238112 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-02-23 20:13:27.238125 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-02-23 20:13:27.238152 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-02-23 20:13:27.238161 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-02-23 20:13:27.238169 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-02-23 20:13:27.238187 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-02-23 20:13:27.238197 | orchestrator |
2026-02-23 20:13:27.238206 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-02-23 20:13:27.238215 | orchestrator | Monday 23 February 2026 20:13:22 +0000 (0:00:01.138) 0:03:44.962 *******
2026-02-23 20:13:27.238226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:13:27.238237 | orchestrator |
2026-02-23 20:13:27.238247 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-02-23 20:13:27.238255 | orchestrator | Monday 23 February 2026 20:13:23 +0000 (0:00:00.377) 0:03:45.339 *******
2026-02-23 20:13:27.238264 | orchestrator | ok: [testbed-manager]
2026-02-23 20:13:27.238273 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:13:27.238281 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:13:27.238290 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:13:27.238299 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:13:27.238309 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:13:27.238318 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:13:27.238327 | orchestrator |
2026-02-23 20:13:27.238336 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-02-23 20:13:27.238345 | orchestrator | Monday 23 February 2026 20:13:24 +0000 (0:00:01.598) 0:03:46.937 *******
2026-02-23 20:13:27.238354 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:13:27.238364 | orchestrator | ok: [testbed-manager]
2026-02-23 20:13:27.238374 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:13:27.238383 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:13:27.238391 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:13:27.238417 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:13:27.238426 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:13:27.238433 | orchestrator |
2026-02-23 20:13:27.238441 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-02-23 20:13:27.238450 | orchestrator | Monday 23 February 2026 20:13:25 +0000 (0:00:00.600) 0:03:47.538 *******
2026-02-23 20:13:27.238458 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:13:27.238466 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:13:27.238474 | orchestrator | changed: [testbed-manager]
2026-02-23 20:13:27.238483 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:13:27.238491 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:13:27.238500 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:13:27.238509 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:13:27.238517 | orchestrator |
2026-02-23 20:13:27.238527 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-02-23 20:13:27.238537 | orchestrator | Monday 23 February 2026 20:13:26 +0000 (0:00:00.643)
0:03:48.181 ******* 2026-02-23 20:13:27.238546 | orchestrator | ok: [testbed-manager] 2026-02-23 20:13:27.238554 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:13:27.238562 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:13:27.238570 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:13:27.238645 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:13:27.238656 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:13:27.238665 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:13:27.238674 | orchestrator | 2026-02-23 20:13:27.238683 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-23 20:13:27.238693 | orchestrator | Monday 23 February 2026 20:13:26 +0000 (0:00:00.592) 0:03:48.774 ******* 2026-02-23 20:13:27.238706 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771876197.5433855, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:27.238733 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771876221.299241, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:27.238744 | orchestrator | changed: 
[testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771876217.4848077, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:27.238776 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771876218.1211634, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782198 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771876204.8102095, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782335 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 
'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771876214.483198, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782361 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771876214.00951, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782379 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782422 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782456 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782472 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782513 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782528 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782538 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:13:32.782548 | orchestrator | 2026-02-23 20:13:32.782559 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-23 20:13:32.782569 | orchestrator | Monday 23 February 2026 20:13:27 +0000 (0:00:01.104) 0:03:49.878 ******* 2026-02-23 20:13:32.782578 | orchestrator | changed: [testbed-manager] 2026-02-23 20:13:32.782588 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:13:32.782597 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:13:32.782616 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:13:32.782626 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:13:32.782635 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:13:32.782645 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:13:32.782655 | orchestrator | 2026-02-23 20:13:32.782670 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-02-23 20:13:32.782685 | orchestrator | Monday 23 February 2026 20:13:28 +0000 (0:00:01.134) 0:03:51.013 ******* 2026-02-23 20:13:32.782699 | orchestrator | changed: [testbed-manager] 2026-02-23 20:13:32.782714 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:13:32.782728 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:13:32.782745 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:13:32.782761 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:13:32.782775 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:13:32.782789 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:13:32.782800 | orchestrator | 2026-02-23 20:13:32.782809 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-23 20:13:32.782819 | orchestrator | Monday 23 February 2026 20:13:30 +0000 (0:00:01.232) 0:03:52.246 ******* 2026-02-23 20:13:32.782829 | orchestrator | changed: [testbed-manager] 2026-02-23 20:13:32.782839 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:13:32.782848 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:13:32.782857 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:13:32.782865 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:13:32.782874 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:13:32.782882 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:13:32.782890 | orchestrator | 2026-02-23 20:13:32.782899 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-23 20:13:32.782913 | orchestrator | Monday 23 February 2026 20:13:31 +0000 (0:00:01.190) 0:03:53.436 ******* 2026-02-23 20:13:32.782923 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:13:32.782932 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:13:32.782940 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:13:32.782948 | orchestrator | skipping: [testbed-manager] 
2026-02-23 20:13:32.782957 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:13:32.782967 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:13:32.782981 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:13:32.782994 | orchestrator | 2026-02-23 20:13:32.783008 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-23 20:13:32.783054 | orchestrator | Monday 23 February 2026 20:13:31 +0000 (0:00:00.261) 0:03:53.698 ******* 2026-02-23 20:13:32.783070 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:13:32.783087 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:13:32.783102 | orchestrator | ok: [testbed-manager] 2026-02-23 20:13:32.783117 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:13:32.783132 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:13:32.783146 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:13:32.783161 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:13:32.783175 | orchestrator | 2026-02-23 20:13:32.783189 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-23 20:13:32.783204 | orchestrator | Monday 23 February 2026 20:13:32 +0000 (0:00:00.827) 0:03:54.526 ******* 2026-02-23 20:13:32.783221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:13:32.783238 | orchestrator | 2026-02-23 20:13:32.783252 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-23 20:13:32.783280 | orchestrator | Monday 23 February 2026 20:13:32 +0000 (0:00:00.398) 0:03:54.925 ******* 2026-02-23 20:14:56.939019 | orchestrator | ok: [testbed-manager] 2026-02-23 20:14:56.939150 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:14:56.939175 | orchestrator | changed: 
[testbed-node-4] 2026-02-23 20:14:56.939194 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:14:56.939273 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:14:56.939290 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:14:56.939306 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:14:56.939324 | orchestrator | 2026-02-23 20:14:56.939343 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-23 20:14:56.939361 | orchestrator | Monday 23 February 2026 20:13:43 +0000 (0:00:10.246) 0:04:05.171 ******* 2026-02-23 20:14:56.939378 | orchestrator | ok: [testbed-manager] 2026-02-23 20:14:56.939396 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:14:56.939411 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:14:56.939427 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:14:56.939446 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:14:56.939464 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:14:56.939481 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:14:56.939499 | orchestrator | 2026-02-23 20:14:56.939517 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-23 20:14:56.939536 | orchestrator | Monday 23 February 2026 20:13:44 +0000 (0:00:01.468) 0:04:06.639 ******* 2026-02-23 20:14:56.939556 | orchestrator | ok: [testbed-manager] 2026-02-23 20:14:56.939575 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:14:56.939591 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:14:56.939607 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:14:56.939625 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:14:56.939643 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:14:56.939663 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:14:56.939680 | orchestrator | 2026-02-23 20:14:56.939699 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-23 20:14:56.939718 | orchestrator | 
Monday 23 February 2026 20:13:46 +0000 (0:00:01.933) 0:04:08.572 ******* 2026-02-23 20:14:56.939736 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:14:56.939754 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:14:56.939772 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:14:56.939790 | orchestrator | ok: [testbed-manager] 2026-02-23 20:14:56.939808 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:14:56.939826 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:14:56.939844 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:14:56.939862 | orchestrator | 2026-02-23 20:14:56.939880 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-23 20:14:56.939899 | orchestrator | Monday 23 February 2026 20:13:46 +0000 (0:00:00.241) 0:04:08.813 ******* 2026-02-23 20:14:56.939917 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:14:56.939935 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:14:56.939954 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:14:56.939970 | orchestrator | ok: [testbed-manager] 2026-02-23 20:14:56.939988 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:14:56.940007 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:14:56.940025 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:14:56.940042 | orchestrator | 2026-02-23 20:14:56.940060 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-23 20:14:56.940079 | orchestrator | Monday 23 February 2026 20:13:46 +0000 (0:00:00.225) 0:04:09.039 ******* 2026-02-23 20:14:56.940096 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:14:56.940115 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:14:56.940133 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:14:56.940151 | orchestrator | ok: [testbed-manager] 2026-02-23 20:14:56.940170 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:14:56.940187 | orchestrator | ok: [testbed-node-1] 2026-02-23 
20:14:56.940248 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:14:56.940267 | orchestrator | 2026-02-23 20:14:56.940285 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-23 20:14:56.940303 | orchestrator | Monday 23 February 2026 20:13:47 +0000 (0:00:00.225) 0:04:09.264 ******* 2026-02-23 20:14:56.940323 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:14:56.940342 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:14:56.940360 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:14:56.940392 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:14:56.940411 | orchestrator | ok: [testbed-manager] 2026-02-23 20:14:56.940430 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:14:56.940447 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:14:56.940465 | orchestrator | 2026-02-23 20:14:56.940483 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-02-23 20:14:56.940502 | orchestrator | Monday 23 February 2026 20:13:52 +0000 (0:00:04.965) 0:04:14.230 ******* 2026-02-23 20:14:56.940522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:14:56.940542 | orchestrator | 2026-02-23 20:14:56.940559 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-02-23 20:14:56.940576 | orchestrator | Monday 23 February 2026 20:13:52 +0000 (0:00:00.381) 0:04:14.611 ******* 2026-02-23 20:14:56.940592 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-02-23 20:14:56.940609 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-02-23 20:14:56.940626 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-02-23 20:14:56.940643 | orchestrator | skipping: 
[testbed-node-3] 2026-02-23 20:14:56.940660 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-02-23 20:14:56.940677 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-02-23 20:14:56.940695 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-02-23 20:14:56.940711 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:14:56.940728 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-02-23 20:14:56.940744 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:14:56.940761 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-02-23 20:14:56.940777 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-02-23 20:14:56.940794 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-02-23 20:14:56.940811 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:14:56.940827 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:14:56.940845 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-02-23 20:14:56.940890 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-02-23 20:14:56.940908 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:14:56.940926 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-02-23 20:14:56.940945 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-02-23 20:14:56.940961 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:14:56.940979 | orchestrator | 2026-02-23 20:14:56.940997 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-02-23 20:14:56.941015 | orchestrator | Monday 23 February 2026 20:13:52 +0000 (0:00:00.315) 0:04:14.926 ******* 2026-02-23 20:14:56.941033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:14:56.941051 | orchestrator | 2026-02-23 20:14:56.941067 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-02-23 20:14:56.941083 | orchestrator | Monday 23 February 2026 20:13:53 +0000 (0:00:00.326) 0:04:15.253 ******* 2026-02-23 20:14:56.941100 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-02-23 20:14:56.941117 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-02-23 20:14:56.941135 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:14:56.941152 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:14:56.941189 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-02-23 20:14:56.941292 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-02-23 20:14:56.941328 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:14:56.941347 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-02-23 20:14:56.941365 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:14:56.941379 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-02-23 20:14:56.941390 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:14:56.941401 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:14:56.941412 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-02-23 20:14:56.941422 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:14:56.941433 | orchestrator | 2026-02-23 20:14:56.941443 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-02-23 20:14:56.941454 | orchestrator | Monday 23 February 2026 20:13:53 +0000 (0:00:00.306) 0:04:15.560 ******* 2026-02-23 20:14:56.941465 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:14:56.941476 | orchestrator | 2026-02-23 20:14:56.941486 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-02-23 20:14:56.941497 | orchestrator | Monday 23 February 2026 20:13:53 +0000 (0:00:00.327) 0:04:15.888 ******* 2026-02-23 20:14:56.941508 | orchestrator | changed: [testbed-manager] 2026-02-23 20:14:56.941518 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:14:56.941529 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:14:56.941539 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:14:56.941549 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:14:56.941560 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:14:56.941571 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:14:56.941581 | orchestrator | 2026-02-23 20:14:56.941592 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-02-23 20:14:56.941605 | orchestrator | Monday 23 February 2026 20:14:29 +0000 (0:00:35.553) 0:04:51.441 ******* 2026-02-23 20:14:56.941618 | orchestrator | changed: [testbed-manager] 2026-02-23 20:14:56.941629 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:14:56.941641 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:14:56.941653 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:14:56.941665 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:14:56.941677 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:14:56.941695 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:14:56.941707 | orchestrator | 2026-02-23 20:14:56.941720 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-02-23 20:14:56.941732 | orchestrator | 
Monday 23 February 2026 20:14:38 +0000 (0:00:09.656) 0:05:01.098 *******
2026-02-23 20:14:56.941744 | orchestrator | changed: [testbed-manager]
2026-02-23 20:14:56.941757 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:14:56.941769 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:14:56.941781 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:14:56.941792 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:14:56.941804 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:14:56.941814 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:14:56.941824 | orchestrator |
2026-02-23 20:14:56.941835 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-23 20:14:56.941846 | orchestrator | Monday 23 February 2026 20:14:47 +0000 (0:00:08.904) 0:05:10.002 *******
2026-02-23 20:14:56.941856 | orchestrator | ok: [testbed-manager]
2026-02-23 20:14:56.941891 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:14:56.941903 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:14:56.941914 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:14:56.941924 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:14:56.941934 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:14:56.941945 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:14:56.941956 | orchestrator |
2026-02-23 20:14:56.941967 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-23 20:14:56.941987 | orchestrator | Monday 23 February 2026 20:14:49 +0000 (0:00:02.061) 0:05:12.064 *******
2026-02-23 20:14:56.941997 | orchestrator | changed: [testbed-manager]
2026-02-23 20:14:56.942006 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:14:56.942073 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:14:56.942085 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:14:56.942095 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:14:56.942105 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:14:56.942114 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:14:56.942124 | orchestrator |
2026-02-23 20:14:56.942147 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-23 20:15:08.604565 | orchestrator | Monday 23 February 2026 20:14:56 +0000 (0:00:07.015) 0:05:19.079 *******
2026-02-23 20:15:08.604658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:15:08.604670 | orchestrator |
2026-02-23 20:15:08.604678 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-23 20:15:08.604685 | orchestrator | Monday 23 February 2026 20:14:57 +0000 (0:00:00.425) 0:05:19.505 *******
2026-02-23 20:15:08.604691 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:15:08.604699 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:15:08.604705 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:15:08.604711 | orchestrator | changed: [testbed-manager]
2026-02-23 20:15:08.604717 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:15:08.604723 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:15:08.604729 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:15:08.604736 | orchestrator |
2026-02-23 20:15:08.604742 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-23 20:15:08.604748 | orchestrator | Monday 23 February 2026 20:14:58 +0000 (0:00:00.761) 0:05:20.266 *******
2026-02-23 20:15:08.604754 | orchestrator | ok: [testbed-manager]
2026-02-23 20:15:08.604761 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:15:08.604768 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:15:08.604774 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:15:08.604780 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:15:08.604786 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:15:08.604792 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:15:08.604798 | orchestrator |
2026-02-23 20:15:08.604804 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-23 20:15:08.604812 | orchestrator | Monday 23 February 2026 20:15:00 +0000 (0:00:01.963) 0:05:22.229 *******
2026-02-23 20:15:08.604823 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:15:08.604838 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:15:08.604849 | orchestrator | changed: [testbed-manager]
2026-02-23 20:15:08.604859 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:15:08.604868 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:15:08.604879 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:15:08.604890 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:15:08.604901 | orchestrator |
2026-02-23 20:15:08.604911 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-23 20:15:08.604918 | orchestrator | Monday 23 February 2026 20:15:00 +0000 (0:00:00.850) 0:05:23.080 *******
2026-02-23 20:15:08.604924 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:15:08.604930 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:15:08.604936 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:15:08.604942 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:15:08.604949 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:15:08.604955 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:15:08.604961 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:15:08.604967 | orchestrator |
2026-02-23 20:15:08.604973 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-23 20:15:08.604980 | orchestrator | Monday 23 February 2026 20:15:01 +0000 (0:00:00.273) 0:05:23.353 *******
2026-02-23 20:15:08.605005 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:15:08.605011 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:15:08.605017 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:15:08.605023 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:15:08.605029 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:15:08.605035 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:15:08.605042 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:15:08.605048 | orchestrator |
2026-02-23 20:15:08.605054 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-23 20:15:08.605060 | orchestrator | Monday 23 February 2026 20:15:01 +0000 (0:00:00.367) 0:05:23.721 *******
2026-02-23 20:15:08.605066 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:15:08.605072 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:15:08.605078 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:15:08.605084 | orchestrator | ok: [testbed-manager]
2026-02-23 20:15:08.605090 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:15:08.605107 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:15:08.605114 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:15:08.605121 | orchestrator |
2026-02-23 20:15:08.605129 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-23 20:15:08.605136 | orchestrator | Monday 23 February 2026 20:15:01 +0000 (0:00:00.294) 0:05:24.015 *******
2026-02-23 20:15:08.605143 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:15:08.605150 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:15:08.605157 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:15:08.605164 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:15:08.605171 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:15:08.605178 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:15:08.605185 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:15:08.605192 | orchestrator |
2026-02-23 20:15:08.605199 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-23 20:15:08.605207 | orchestrator | Monday 23 February 2026 20:15:02 +0000 (0:00:00.268) 0:05:24.284 *******
2026-02-23 20:15:08.605214 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:15:08.605266 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:15:08.605273 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:15:08.605280 | orchestrator | ok: [testbed-manager]
2026-02-23 20:15:08.605288 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:15:08.605295 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:15:08.605302 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:15:08.605309 | orchestrator |
2026-02-23 20:15:08.605316 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-23 20:15:08.605323 | orchestrator | Monday 23 February 2026 20:15:02 +0000 (0:00:00.304) 0:05:24.588 *******
2026-02-23 20:15:08.605330 | orchestrator | ok: [testbed-node-3] =>
2026-02-23 20:15:08.605338 | orchestrator |   docker_version: 5:27.5.1
2026-02-23 20:15:08.605345 | orchestrator | ok: [testbed-node-4] =>
2026-02-23 20:15:08.605352 | orchestrator |   docker_version: 5:27.5.1
2026-02-23 20:15:08.605359 | orchestrator | ok: [testbed-node-5] =>
2026-02-23 20:15:08.605366 | orchestrator |   docker_version: 5:27.5.1
2026-02-23 20:15:08.605374 | orchestrator | ok: [testbed-manager] =>
2026-02-23 20:15:08.605380 | orchestrator |   docker_version: 5:27.5.1
2026-02-23 20:15:08.605402 | orchestrator | ok: [testbed-node-0] =>
2026-02-23 20:15:08.605409 | orchestrator |   docker_version: 5:27.5.1
2026-02-23 20:15:08.605416 | orchestrator | ok: [testbed-node-1] =>
2026-02-23 20:15:08.605422 | orchestrator |   docker_version: 5:27.5.1
2026-02-23 20:15:08.605428 | orchestrator | ok: [testbed-node-2] =>
2026-02-23 20:15:08.605434 | orchestrator |   docker_version: 5:27.5.1
2026-02-23 20:15:08.605440 | orchestrator |
2026-02-23 20:15:08.605446 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-23 20:15:08.605454 | orchestrator | Monday 23 February 2026 20:15:02 +0000 (0:00:00.255) 0:05:24.843 *******
2026-02-23 20:15:08.605464 | orchestrator | ok: [testbed-node-3] =>
2026-02-23 20:15:08.605485 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-23 20:15:08.605500 | orchestrator | ok: [testbed-node-4] =>
2026-02-23 20:15:08.605509 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-23 20:15:08.605519 | orchestrator | ok: [testbed-node-5] =>
2026-02-23 20:15:08.605528 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-23 20:15:08.605537 | orchestrator | ok: [testbed-manager] =>
2026-02-23 20:15:08.605547 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-23 20:15:08.605557 | orchestrator | ok: [testbed-node-0] =>
2026-02-23 20:15:08.605567 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-23 20:15:08.605578 | orchestrator | ok: [testbed-node-1] =>
2026-02-23 20:15:08.605588 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-23 20:15:08.605598 | orchestrator | ok: [testbed-node-2] =>
2026-02-23 20:15:08.605609 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-23 20:15:08.605615 | orchestrator |
2026-02-23 20:15:08.605622 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-23 20:15:08.605627 | orchestrator | Monday 23 February 2026 20:15:02 +0000 (0:00:00.272) 0:05:25.115 *******
2026-02-23 20:15:08.605633 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:15:08.605639 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:15:08.605645 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:15:08.605651 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:15:08.605657 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:15:08.605663 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:15:08.605669 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:15:08.605676 | orchestrator |
2026-02-23 20:15:08.605682 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-23 20:15:08.605688 | orchestrator | Monday 23 February 2026 20:15:03 +0000 (0:00:00.304) 0:05:25.420 *******
2026-02-23 20:15:08.605694 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:15:08.605700 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:15:08.605706 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:15:08.605712 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:15:08.605719 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:15:08.605728 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:15:08.605737 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:15:08.605747 | orchestrator |
2026-02-23 20:15:08.605757 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-23 20:15:08.605767 | orchestrator | Monday 23 February 2026 20:15:03 +0000 (0:00:00.350) 0:05:25.771 *******
2026-02-23 20:15:08.605778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:15:08.605791 | orchestrator |
2026-02-23 20:15:08.605798 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-23 20:15:08.605804 | orchestrator | Monday 23 February 2026 20:15:04 +0000 (0:00:00.391) 0:05:26.163 *******
2026-02-23 20:15:08.605811 | orchestrator | ok: [testbed-manager]
2026-02-23 20:15:08.605821 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:15:08.605838 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:15:08.605849 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:15:08.605858 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:15:08.605869 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:15:08.605880 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:15:08.605890 | orchestrator |
2026-02-23 20:15:08.605901 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-23 20:15:08.605908 | orchestrator | Monday 23 February 2026 20:15:05 +0000 (0:00:01.013) 0:05:27.177 *******
2026-02-23 20:15:08.605921 | orchestrator | ok: [testbed-manager]
2026-02-23 20:15:08.605928 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:15:08.605934 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:15:08.605940 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:15:08.605946 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:15:08.605959 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:15:08.605965 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:15:08.605971 | orchestrator |
2026-02-23 20:15:08.605978 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-23 20:15:08.605986 | orchestrator | Monday 23 February 2026 20:15:08 +0000 (0:00:03.198) 0:05:30.375 *******
2026-02-23 20:15:08.605993 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-23 20:15:08.605999 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-23 20:15:08.606005 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-23 20:15:08.606011 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-23 20:15:08.606124 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-23 20:15:08.606147 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:15:08.606154 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-23 20:15:08.606160 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-23 20:15:08.606166 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-23 20:15:08.606173 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-23 20:15:08.606179 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:15:08.606185 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-23 20:15:08.606191 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-23 20:15:08.606197 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:15:08.606203 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-23 20:15:08.606209 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-23 20:15:08.606273 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-23 20:16:15.313381 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-23 20:16:15.313464 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:16:15.313474 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-23 20:16:15.313480 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-23 20:16:15.313486 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-23 20:16:15.313557 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:16:15.313564 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:16:15.313570 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-23 20:16:15.313575 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-23 20:16:15.313581 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-23 20:16:15.313586 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:16:15.313592 | orchestrator |
2026-02-23 20:16:15.313598 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-23 20:16:15.313605 | orchestrator | Monday 23 February 2026 20:15:08 +0000 (0:00:00.726) 0:05:31.102 *******
2026-02-23 20:16:15.313610 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:15.313615 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.313621 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.313626 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.313631 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.313636 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.313641 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.313646 | orchestrator |
2026-02-23 20:16:15.313651 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-23 20:16:15.313656 | orchestrator | Monday 23 February 2026 20:15:17 +0000 (0:00:08.337) 0:05:39.440 *******
2026-02-23 20:16:15.313661 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.313666 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.313671 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.313676 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:15.313681 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.313686 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.313709 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.313715 | orchestrator |
2026-02-23 20:16:15.313720 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-23 20:16:15.313725 | orchestrator | Monday 23 February 2026 20:15:18 +0000 (0:00:01.078) 0:05:40.518 *******
2026-02-23 20:16:15.313730 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:15.313735 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.313740 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.313745 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.313750 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.313755 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.313760 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.313765 | orchestrator |
2026-02-23 20:16:15.313770 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-23 20:16:15.313775 | orchestrator | Monday 23 February 2026 20:15:27 +0000 (0:00:09.626) 0:05:50.145 *******
2026-02-23 20:16:15.313781 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.313786 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.313790 | orchestrator | changed: [testbed-manager]
2026-02-23 20:16:15.313796 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.313801 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.313805 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.313810 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.313815 | orchestrator |
2026-02-23 20:16:15.313820 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-23 20:16:15.313825 | orchestrator | Monday 23 February 2026 20:15:31 +0000 (0:00:03.466) 0:05:53.612 *******
2026-02-23 20:16:15.313830 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.313835 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.313840 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:15.313845 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.313850 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.313855 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.313860 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.313865 | orchestrator |
2026-02-23 20:16:15.313883 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-23 20:16:15.313889 | orchestrator | Monday 23 February 2026 20:15:33 +0000 (0:00:01.555) 0:05:55.167 *******
2026-02-23 20:16:15.313895 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.313900 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.313906 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:15.313911 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.313917 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.313923 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.313929 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.313934 | orchestrator |
2026-02-23 20:16:15.313940 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-23 20:16:15.313946 | orchestrator | Monday 23 February 2026 20:15:34 +0000 (0:00:01.377) 0:05:56.545 *******
2026-02-23 20:16:15.313951 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:16:15.313957 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:16:15.313963 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:16:15.313969 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:16:15.313974 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:16:15.313980 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:16:15.313986 | orchestrator | changed: [testbed-manager]
2026-02-23 20:16:15.313991 | orchestrator |
2026-02-23 20:16:15.313997 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-23 20:16:15.314003 | orchestrator | Monday 23 February 2026 20:15:35 +0000 (0:00:00.854) 0:05:57.400 *******
2026-02-23 20:16:15.314008 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:15.314056 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.314062 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.314073 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.314079 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.314085 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.314090 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.314096 | orchestrator |
2026-02-23 20:16:15.314102 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-23 20:16:15.314121 | orchestrator | Monday 23 February 2026 20:15:45 +0000 (0:00:10.001) 0:06:07.402 *******
2026-02-23 20:16:15.314127 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.314132 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.314138 | orchestrator | changed: [testbed-manager]
2026-02-23 20:16:15.314144 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.314149 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.314155 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.314161 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.314166 | orchestrator |
2026-02-23 20:16:15.314172 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-23 20:16:15.314177 | orchestrator | Monday 23 February 2026 20:15:46 +0000 (0:00:01.005) 0:06:08.407 *******
2026-02-23 20:16:15.314183 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:15.314189 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.314194 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.314200 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.314206 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.314212 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.314217 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.314223 | orchestrator |
2026-02-23 20:16:15.314229 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-23 20:16:15.314234 | orchestrator | Monday 23 February 2026 20:15:56 +0000 (0:00:10.140) 0:06:18.547 *******
2026-02-23 20:16:15.314240 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:15.314245 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.314250 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.314255 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.314260 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.314265 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.314270 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.314275 | orchestrator |
2026-02-23 20:16:15.314280 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-23 20:16:15.314285 | orchestrator | Monday 23 February 2026 20:16:08 +0000 (0:00:11.725) 0:06:30.273 *******
2026-02-23 20:16:15.314311 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-23 20:16:15.314317 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-23 20:16:15.314322 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-23 20:16:15.314326 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-23 20:16:15.314332 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-23 20:16:15.314337 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-23 20:16:15.314341 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-23 20:16:15.314347 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-23 20:16:15.314352 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-23 20:16:15.314356 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-23 20:16:15.314361 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-23 20:16:15.314366 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-23 20:16:15.314397 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-23 20:16:15.314403 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-23 20:16:15.314408 | orchestrator |
2026-02-23 20:16:15.314413 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-23 20:16:15.314418 | orchestrator | Monday 23 February 2026 20:16:09 +0000 (0:00:01.162) 0:06:31.436 *******
2026-02-23 20:16:15.314429 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:16:15.314434 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:16:15.314439 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:16:15.314444 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:16:15.314449 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:16:15.314454 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:16:15.314458 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:16:15.314463 | orchestrator |
2026-02-23 20:16:15.314468 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-23 20:16:15.314475 | orchestrator | Monday 23 February 2026 20:16:09 +0000 (0:00:00.494) 0:06:31.931 *******
2026-02-23 20:16:15.314484 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:15.314491 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:15.314499 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:15.314507 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:15.314516 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:15.314525 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:15.314533 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:15.314540 | orchestrator |
2026-02-23 20:16:15.314548 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-23 20:16:15.314557 | orchestrator | Monday 23 February 2026 20:16:14 +0000 (0:00:04.627) 0:06:36.559 *******
2026-02-23 20:16:15.314564 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:16:15.314572 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:16:15.314579 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:16:15.314587 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:16:15.314595 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:16:15.314603 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:16:15.314611 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:16:15.314618 | orchestrator |
2026-02-23 20:16:15.314667 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-23 20:16:15.314677 | orchestrator | Monday 23 February 2026 20:16:15 +0000 (0:00:00.621) 0:06:37.180 *******
2026-02-23 20:16:15.314685 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-23 20:16:15.314693 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-23 20:16:15.314701 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:16:15.314709 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-23 20:16:15.314717 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-23 20:16:15.314725 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:16:15.314733 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-23 20:16:15.314741 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-23 20:16:15.314749 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:16:15.314765 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-23 20:16:35.905094 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-23 20:16:35.905248 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:16:35.905278 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-23 20:16:35.905299 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-23 20:16:35.905352 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:16:35.905370 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-23 20:16:35.905389 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-23 20:16:35.905407 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:16:35.905425 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-23 20:16:35.905444 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-23 20:16:35.905462 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:16:35.905481 | orchestrator |
2026-02-23 20:16:35.905503 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-23 20:16:35.905556 | orchestrator | Monday 23 February 2026 20:16:15 +0000 (0:00:00.561) 0:06:37.742 *******
2026-02-23 20:16:35.905575 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:16:35.905594 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:16:35.905612 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:16:35.905630 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:16:35.905649 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:16:35.905669 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:16:35.905688 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:16:35.905699 | orchestrator |
2026-02-23 20:16:35.905709 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-23 20:16:35.905720 | orchestrator | Monday 23 February 2026 20:16:16 +0000 (0:00:00.491) 0:06:38.233 *******
2026-02-23 20:16:35.905731 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:16:35.905741 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:16:35.905752 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:16:35.905762 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:16:35.905773 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:16:35.905783 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:16:35.905794 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:16:35.905804 | orchestrator |
2026-02-23 20:16:35.905815 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-23 20:16:35.905826 | orchestrator | Monday 23 February 2026 20:16:16 +0000 (0:00:00.685) 0:06:38.730 *******
2026-02-23 20:16:35.905837 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:16:35.905847 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:16:35.905858 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:16:35.905871 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:16:35.905890 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:16:35.905908 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:16:35.906405 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:16:35.906418 | orchestrator |
2026-02-23 20:16:35.906430 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-02-23 20:16:35.906441 | orchestrator | Monday 23 February 2026 20:16:17 +0000 (0:00:00.685) 0:06:39.415 *******
2026-02-23 20:16:35.906452 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:35.906470 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:16:35.906492 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:16:35.906762 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:16:35.906790 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:16:35.906807 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:16:35.906826 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:16:35.906844 | orchestrator |
2026-02-23 20:16:35.906862 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-02-23 20:16:35.906881 | orchestrator | Monday 23 February 2026 20:16:19 +0000 (0:00:02.012) 0:06:41.427 *******
2026-02-23 20:16:35.906915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:16:35.906945 | orchestrator |
2026-02-23 20:16:35.907005 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-02-23 20:16:35.907025 | orchestrator | Monday 23 February 2026 20:16:20 +0000 (0:00:00.871) 0:06:42.299 *******
2026-02-23 20:16:35.907044 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:35.907063 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:35.907081 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:35.907096 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:35.907107 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:35.907118 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:35.907129 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:35.907140 | orchestrator |
2026-02-23 20:16:35.907151 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-02-23 20:16:35.907178 | orchestrator | Monday 23 February 2026 20:16:20 +0000 (0:00:00.830) 0:06:43.130 *******
2026-02-23 20:16:35.907189 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:35.907199 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:35.907210 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:35.907226 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:35.907251 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:35.907271 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:35.907288 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:35.907376 | orchestrator |
2026-02-23 20:16:35.907396 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-02-23 20:16:35.907412 | orchestrator | Monday 23 February 2026 20:16:22 +0000 (0:00:01.080) 0:06:44.210 *******
2026-02-23 20:16:35.907427 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:35.907444 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:35.907461 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:35.907820 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:35.907841 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:35.907860 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:35.907877 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:35.907895 | orchestrator |
2026-02-23 20:16:35.907912 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-02-23 20:16:35.907963 | orchestrator | Monday 23 February 2026 20:16:23 +0000 (0:00:01.336) 0:06:45.546 *******
2026-02-23 20:16:35.907982 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:16:35.907999 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:16:35.908016 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:16:35.908033 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:16:35.908051 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:16:35.908068 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:16:35.908086 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:16:35.908103 | orchestrator |
2026-02-23 20:16:35.908122 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-02-23 20:16:35.908139 | orchestrator | Monday 23 February 2026 20:16:24 +0000 (0:00:01.407) 0:06:46.954 *******
2026-02-23 20:16:35.908157 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:35.908175 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:35.908194 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:35.908213 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:35.908230 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:35.908247 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:35.908258 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:35.908268 | orchestrator |
2026-02-23 20:16:35.908279 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-02-23 20:16:35.908289 | orchestrator | Monday 23 February 2026 20:16:26 +0000 (0:00:01.300) 0:06:48.255 *******
2026-02-23 20:16:35.908300 | orchestrator | changed: [testbed-manager]
2026-02-23 20:16:35.908521 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:16:35.908541 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:16:35.908552 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:16:35.908563 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:16:35.908573 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:16:35.908584 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:16:35.908595 | orchestrator |
2026-02-23 20:16:35.908607 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-02-23 20:16:35.908618 | orchestrator | Monday 23 February 2026 20:16:27 +0000 (0:00:01.509) 0:06:49.765 *******
2026-02-23 20:16:35.908634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:16:35.908654 | orchestrator |
2026-02-23 20:16:35.908672 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-02-23 20:16:35.908689 | orchestrator | Monday 23 February 2026 20:16:28 +0000 (0:00:00.994) 0:06:50.759 *******
2026-02-23 20:16:35.908733 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:16:35.908745 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:16:35.908756 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:16:35.908766 | orchestrator | ok: [testbed-manager]
2026-02-23 20:16:35.908777 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:16:35.908787 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:16:35.908798 | orchestrator | ok:
[testbed-node-2] 2026-02-23 20:16:35.908808 | orchestrator | 2026-02-23 20:16:35.908819 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-23 20:16:35.908830 | orchestrator | Monday 23 February 2026 20:16:30 +0000 (0:00:01.522) 0:06:52.282 ******* 2026-02-23 20:16:35.908840 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:16:35.908851 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:16:35.908861 | orchestrator | ok: [testbed-manager] 2026-02-23 20:16:35.908872 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:16:35.908882 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:16:35.908892 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:16:35.908903 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:16:35.908913 | orchestrator | 2026-02-23 20:16:35.908924 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-23 20:16:35.908934 | orchestrator | Monday 23 February 2026 20:16:31 +0000 (0:00:01.145) 0:06:53.427 ******* 2026-02-23 20:16:35.908945 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:16:35.908955 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:16:35.908966 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:16:35.908988 | orchestrator | ok: [testbed-manager] 2026-02-23 20:16:35.908999 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:16:35.909010 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:16:35.909021 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:16:35.909031 | orchestrator | 2026-02-23 20:16:35.909043 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-23 20:16:35.909056 | orchestrator | Monday 23 February 2026 20:16:33 +0000 (0:00:02.465) 0:06:55.893 ******* 2026-02-23 20:16:35.909074 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:16:35.909092 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:16:35.909108 | orchestrator | ok: [testbed-node-5] 2026-02-23 
20:16:35.909124 | orchestrator | ok: [testbed-manager] 2026-02-23 20:16:35.909140 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:16:35.909158 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:16:35.909174 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:16:35.909192 | orchestrator | 2026-02-23 20:16:35.909209 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-23 20:16:35.909226 | orchestrator | Monday 23 February 2026 20:16:34 +0000 (0:00:01.145) 0:06:57.039 ******* 2026-02-23 20:16:35.909246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:16:35.909269 | orchestrator | 2026-02-23 20:16:35.909287 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-23 20:16:35.909333 | orchestrator | Monday 23 February 2026 20:16:35 +0000 (0:00:00.866) 0:06:57.905 ******* 2026-02-23 20:16:35.909352 | orchestrator | 2026-02-23 20:16:35.909370 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-23 20:16:35.909387 | orchestrator | Monday 23 February 2026 20:16:35 +0000 (0:00:00.041) 0:06:57.947 ******* 2026-02-23 20:16:35.909406 | orchestrator | 2026-02-23 20:16:35.909423 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-23 20:16:35.909442 | orchestrator | Monday 23 February 2026 20:16:35 +0000 (0:00:00.048) 0:06:57.996 ******* 2026-02-23 20:16:35.909456 | orchestrator | 2026-02-23 20:16:35.909467 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-23 20:16:35.909497 | orchestrator | Monday 23 February 2026 20:16:35 +0000 (0:00:00.048) 0:06:58.044 ******* 2026-02-23 20:17:03.330833 | orchestrator | 
2026-02-23 20:17:03.330927 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-23 20:17:03.330964 | orchestrator | Monday 23 February 2026 20:16:35 +0000 (0:00:00.039) 0:06:58.083 ******* 2026-02-23 20:17:03.330974 | orchestrator | 2026-02-23 20:17:03.330984 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-23 20:17:03.330993 | orchestrator | Monday 23 February 2026 20:16:35 +0000 (0:00:00.045) 0:06:58.128 ******* 2026-02-23 20:17:03.331002 | orchestrator | 2026-02-23 20:17:03.331009 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-23 20:17:03.331014 | orchestrator | Monday 23 February 2026 20:16:36 +0000 (0:00:00.040) 0:06:58.168 ******* 2026-02-23 20:17:03.331053 | orchestrator | 2026-02-23 20:17:03.331063 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-23 20:17:03.331073 | orchestrator | Monday 23 February 2026 20:16:36 +0000 (0:00:00.049) 0:06:58.218 ******* 2026-02-23 20:17:03.331083 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:17:03.331093 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:17:03.331102 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:17:03.331108 | orchestrator | 2026-02-23 20:17:03.331114 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-23 20:17:03.331119 | orchestrator | Monday 23 February 2026 20:16:37 +0000 (0:00:01.270) 0:06:59.489 ******* 2026-02-23 20:17:03.331125 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:17:03.331132 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:17:03.331137 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:17:03.331143 | orchestrator | changed: [testbed-manager] 2026-02-23 20:17:03.331148 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:17:03.331154 | orchestrator | changed: 
[testbed-node-1] 2026-02-23 20:17:03.331159 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:17:03.331165 | orchestrator | 2026-02-23 20:17:03.331170 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-23 20:17:03.331176 | orchestrator | Monday 23 February 2026 20:16:38 +0000 (0:00:01.521) 0:07:01.010 ******* 2026-02-23 20:17:03.331181 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:17:03.331187 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:17:03.331192 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:17:03.331197 | orchestrator | changed: [testbed-manager] 2026-02-23 20:17:03.331203 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:17:03.331208 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:17:03.331213 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:17:03.331218 | orchestrator | 2026-02-23 20:17:03.331224 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-23 20:17:03.331229 | orchestrator | Monday 23 February 2026 20:16:40 +0000 (0:00:01.204) 0:07:02.214 ******* 2026-02-23 20:17:03.331235 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:17:03.331240 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:17:03.331245 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:17:03.331251 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:17:03.331256 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:17:03.331262 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:17:03.331267 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:17:03.331272 | orchestrator | 2026-02-23 20:17:03.331278 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-23 20:17:03.331283 | orchestrator | Monday 23 February 2026 20:16:42 +0000 (0:00:02.428) 0:07:04.643 ******* 2026-02-23 20:17:03.331288 | orchestrator | skipping: [testbed-node-3] 
2026-02-23 20:17:03.331294 | orchestrator | 2026-02-23 20:17:03.331299 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-23 20:17:03.331305 | orchestrator | Monday 23 February 2026 20:16:42 +0000 (0:00:00.076) 0:07:04.719 ******* 2026-02-23 20:17:03.331310 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:03.331315 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:17:03.331321 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:17:03.331326 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:17:03.331381 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:17:03.331387 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:17:03.331393 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:17:03.331399 | orchestrator | 2026-02-23 20:17:03.331415 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-23 20:17:03.331423 | orchestrator | Monday 23 February 2026 20:16:43 +0000 (0:00:00.990) 0:07:05.710 ******* 2026-02-23 20:17:03.331429 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:17:03.331435 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:17:03.331442 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:17:03.331448 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:17:03.331454 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:17:03.331460 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:17:03.331466 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:17:03.331472 | orchestrator | 2026-02-23 20:17:03.331479 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-23 20:17:03.331485 | orchestrator | Monday 23 February 2026 20:16:44 +0000 (0:00:00.720) 0:07:06.431 ******* 2026-02-23 20:17:03.331493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:17:03.331502 | orchestrator | 2026-02-23 20:17:03.331508 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-23 20:17:03.331514 | orchestrator | Monday 23 February 2026 20:16:45 +0000 (0:00:00.881) 0:07:07.312 ******* 2026-02-23 20:17:03.331521 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:17:03.331527 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:17:03.331533 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:17:03.331539 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:03.331545 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:17:03.331551 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:17:03.331558 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:17:03.331564 | orchestrator | 2026-02-23 20:17:03.331570 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-23 20:17:03.331577 | orchestrator | Monday 23 February 2026 20:16:46 +0000 (0:00:00.922) 0:07:08.235 ******* 2026-02-23 20:17:03.331583 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-23 20:17:03.331602 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-23 20:17:03.331608 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-23 20:17:03.331615 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-23 20:17:03.331621 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-23 20:17:03.331627 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-23 20:17:03.331634 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-23 20:17:03.331640 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-23 20:17:03.331647 | orchestrator | changed: [testbed-node-3] => 
(item=docker_images) 2026-02-23 20:17:03.331653 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-23 20:17:03.331659 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-23 20:17:03.331665 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-23 20:17:03.331671 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-23 20:17:03.331677 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-23 20:17:03.331684 | orchestrator | 2026-02-23 20:17:03.331690 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-02-23 20:17:03.331696 | orchestrator | Monday 23 February 2026 20:16:48 +0000 (0:00:02.878) 0:07:11.114 ******* 2026-02-23 20:17:03.331702 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:17:03.331709 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:17:03.331715 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:17:03.331725 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:17:03.331732 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:17:03.331738 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:17:03.331744 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:17:03.331751 | orchestrator | 2026-02-23 20:17:03.331757 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-23 20:17:03.331763 | orchestrator | Monday 23 February 2026 20:16:49 +0000 (0:00:00.501) 0:07:11.615 ******* 2026-02-23 20:17:03.331770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:17:03.331778 | orchestrator | 2026-02-23 20:17:03.331783 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-02-23 20:17:03.331789 | orchestrator | Monday 23 February 2026 20:16:50 +0000 (0:00:00.800) 0:07:12.416 ******* 2026-02-23 20:17:03.331794 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:17:03.331799 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:17:03.331805 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:17:03.331810 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:03.331816 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:17:03.331821 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:17:03.331826 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:17:03.331831 | orchestrator | 2026-02-23 20:17:03.331837 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-23 20:17:03.331842 | orchestrator | Monday 23 February 2026 20:16:51 +0000 (0:00:01.033) 0:07:13.449 ******* 2026-02-23 20:17:03.331848 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:17:03.331853 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:17:03.331858 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:17:03.331864 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:03.331869 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:17:03.331874 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:17:03.331883 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:17:03.331895 | orchestrator | 2026-02-23 20:17:03.331907 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-23 20:17:03.331916 | orchestrator | Monday 23 February 2026 20:16:52 +0000 (0:00:00.811) 0:07:14.261 ******* 2026-02-23 20:17:03.331926 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:17:03.331957 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:17:03.331964 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:17:03.331974 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:17:03.331979 | orchestrator | skipping: [testbed-node-0] 
2026-02-23 20:17:03.331985 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:17:03.331999 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:17:03.332005 | orchestrator | 2026-02-23 20:17:03.332011 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-23 20:17:03.332017 | orchestrator | Monday 23 February 2026 20:16:52 +0000 (0:00:00.528) 0:07:14.789 ******* 2026-02-23 20:17:03.332026 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:03.332041 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:17:03.332051 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:17:03.332061 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:17:03.332071 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:17:03.332080 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:17:03.332089 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:17:03.332097 | orchestrator | 2026-02-23 20:17:03.332103 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-23 20:17:03.332108 | orchestrator | Monday 23 February 2026 20:16:54 +0000 (0:00:01.887) 0:07:16.677 ******* 2026-02-23 20:17:03.332114 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:17:03.332119 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:17:03.332125 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:17:03.332130 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:17:03.332135 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:17:03.332149 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:17:03.332154 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:17:03.332160 | orchestrator | 2026-02-23 20:17:03.332165 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-23 20:17:03.332171 | orchestrator | Monday 23 February 2026 20:16:55 +0000 (0:00:00.475) 0:07:17.153 ******* 2026-02-23 20:17:03.332176 | orchestrator | 
ok: [testbed-manager] 2026-02-23 20:17:03.332181 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:17:03.332187 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:17:03.332192 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:17:03.332197 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:17:03.332203 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:17:03.332213 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:17:34.909325 | orchestrator | 2026-02-23 20:17:34.909515 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-02-23 20:17:34.909535 | orchestrator | Monday 23 February 2026 20:17:03 +0000 (0:00:08.373) 0:07:25.526 ******* 2026-02-23 20:17:34.909548 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:34.909560 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:17:34.909572 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:17:34.909583 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:17:34.909594 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:17:34.909605 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:17:34.909615 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:17:34.909626 | orchestrator | 2026-02-23 20:17:34.909637 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-23 20:17:34.909648 | orchestrator | Monday 23 February 2026 20:17:04 +0000 (0:00:01.249) 0:07:26.775 ******* 2026-02-23 20:17:34.909659 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:34.909669 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:17:34.909680 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:17:34.909691 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:17:34.909704 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:17:34.909724 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:17:34.909741 | orchestrator | changed: [testbed-node-1] 2026-02-23 
20:17:34.909759 | orchestrator | 2026-02-23 20:17:34.909776 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-23 20:17:34.909795 | orchestrator | Monday 23 February 2026 20:17:06 +0000 (0:00:01.694) 0:07:28.470 ******* 2026-02-23 20:17:34.909814 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:34.909833 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:17:34.909851 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:17:34.909867 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:17:34.909880 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:17:34.909892 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:17:34.909905 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:17:34.909916 | orchestrator | 2026-02-23 20:17:34.909928 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-23 20:17:34.909940 | orchestrator | Monday 23 February 2026 20:17:07 +0000 (0:00:01.647) 0:07:30.117 ******* 2026-02-23 20:17:34.909953 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:17:34.909965 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:17:34.909977 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:17:34.909989 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:34.910002 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:17:34.910098 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:17:34.910113 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:17:34.910125 | orchestrator | 2026-02-23 20:17:34.910138 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-23 20:17:34.910150 | orchestrator | Monday 23 February 2026 20:17:09 +0000 (0:00:01.131) 0:07:31.249 ******* 2026-02-23 20:17:34.910163 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:17:34.910175 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:17:34.910187 | orchestrator | skipping: 
[testbed-node-5] 2026-02-23 20:17:34.910226 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:17:34.910238 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:17:34.910248 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:17:34.910259 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:17:34.910271 | orchestrator | 2026-02-23 20:17:34.910282 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-23 20:17:34.910293 | orchestrator | Monday 23 February 2026 20:17:09 +0000 (0:00:00.888) 0:07:32.137 ******* 2026-02-23 20:17:34.910304 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:17:34.910315 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:17:34.910326 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:17:34.910336 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:17:34.910347 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:17:34.910358 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:17:34.910399 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:17:34.910410 | orchestrator | 2026-02-23 20:17:34.910426 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-23 20:17:34.910452 | orchestrator | Monday 23 February 2026 20:17:10 +0000 (0:00:00.497) 0:07:32.635 ******* 2026-02-23 20:17:34.910475 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:17:34.910492 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:17:34.910509 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:17:34.910526 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:34.910544 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:17:34.910563 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:17:34.910580 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:17:34.910598 | orchestrator | 2026-02-23 20:17:34.910616 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-02-23 20:17:34.910632 | orchestrator | Monday 23 February 2026 20:17:10 +0000 (0:00:00.510) 0:07:33.145 ******* 2026-02-23 20:17:34.910648 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:17:34.910665 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:17:34.910681 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:17:34.910700 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:34.910749 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:17:34.910768 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:17:34.910787 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:17:34.910805 | orchestrator | 2026-02-23 20:17:34.910822 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-23 20:17:34.910834 | orchestrator | Monday 23 February 2026 20:17:11 +0000 (0:00:00.657) 0:07:33.803 ******* 2026-02-23 20:17:34.910844 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:17:34.910855 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:17:34.910866 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:17:34.910876 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:34.910887 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:17:34.910897 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:17:34.910908 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:17:34.910918 | orchestrator | 2026-02-23 20:17:34.910929 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-23 20:17:34.910940 | orchestrator | Monday 23 February 2026 20:17:12 +0000 (0:00:00.430) 0:07:34.234 ******* 2026-02-23 20:17:34.910950 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:17:34.910961 | orchestrator | ok: [testbed-manager] 2026-02-23 20:17:34.910971 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:17:34.910982 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:17:34.910992 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:17:34.911003 | orchestrator | ok: [testbed-node-2] 
2026-02-23 20:17:34.911031 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:17:34.911043 | orchestrator |
2026-02-23 20:17:34.911074 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-23 20:17:34.911087 | orchestrator | Monday 23 February 2026 20:17:16 +0000 (0:00:04.800) 0:07:39.035 *******
2026-02-23 20:17:34.911097 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:17:34.911108 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:17:34.911130 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:17:34.911141 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:17:34.911152 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:17:34.911162 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:17:34.911172 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:17:34.911183 | orchestrator |
2026-02-23 20:17:34.911194 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-23 20:17:34.911204 | orchestrator | Monday 23 February 2026 20:17:17 +0000 (0:00:00.431) 0:07:39.466 *******
2026-02-23 20:17:34.911217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:17:34.911231 | orchestrator |
2026-02-23 20:17:34.911242 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-23 20:17:34.911253 | orchestrator | Monday 23 February 2026 20:17:18 +0000 (0:00:00.879) 0:07:40.345 *******
2026-02-23 20:17:34.911263 | orchestrator | ok: [testbed-manager]
2026-02-23 20:17:34.911274 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:17:34.911284 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:17:34.911295 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:17:34.911305 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:17:34.911316 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:17:34.911326 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:17:34.911337 | orchestrator |
2026-02-23 20:17:34.911347 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-23 20:17:34.911358 | orchestrator | Monday 23 February 2026 20:17:19 +0000 (0:00:01.798) 0:07:42.144 *******
2026-02-23 20:17:34.911401 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:17:34.911413 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:17:34.911423 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:17:34.911434 | orchestrator | ok: [testbed-manager]
2026-02-23 20:17:34.911444 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:17:34.911455 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:17:34.911465 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:17:34.911476 | orchestrator |
2026-02-23 20:17:34.911487 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-23 20:17:34.911497 | orchestrator | Monday 23 February 2026 20:17:21 +0000 (0:00:01.112) 0:07:43.256 *******
2026-02-23 20:17:34.911508 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:17:34.911518 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:17:34.911529 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:17:34.911539 | orchestrator | ok: [testbed-manager]
2026-02-23 20:17:34.911550 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:17:34.911560 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:17:34.911571 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:17:34.911581 | orchestrator |
2026-02-23 20:17:34.911592 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-23 20:17:34.911602 | orchestrator | Monday 23 February 2026 20:17:21 +0000 (0:00:00.833) 0:07:44.090 *******
2026-02-23 20:17:34.911613 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-23 20:17:34.911626 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-23 20:17:34.911637 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-23 20:17:34.911653 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-23 20:17:34.911665 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-23 20:17:34.911675 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-23 20:17:34.911692 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-23 20:17:34.911703 | orchestrator |
2026-02-23 20:17:34.911714 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-23 20:17:34.911724 | orchestrator | Monday 23 February 2026 20:17:23 +0000 (0:00:02.016) 0:07:46.106 *******
2026-02-23 20:17:34.911735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:17:34.911746 | orchestrator |
2026-02-23 20:17:34.911757 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-23 20:17:34.911768 | orchestrator | Monday 23 February 2026 20:17:24 +0000 (0:00:00.787) 0:07:46.894 *******
2026-02-23 20:17:34.911778 | orchestrator | changed: [testbed-manager]
2026-02-23 20:17:34.911789 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:17:34.911799 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:17:34.911810 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:17:34.911820 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:17:34.911831 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:17:34.911841 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:17:34.911852 | orchestrator |
2026-02-23 20:17:34.911870 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-23 20:18:05.999163 | orchestrator | Monday 23 February 2026 20:17:34 +0000 (0:00:10.155) 0:07:57.049 *******
2026-02-23 20:18:05.999309 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:18:05.999340 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:18:05.999361 | orchestrator | ok: [testbed-manager]
2026-02-23 20:18:05.999426 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:18:05.999448 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:18:05.999467 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:18:05.999485 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:18:05.999503 | orchestrator |
2026-02-23 20:18:05.999524 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-23 20:18:05.999544 | orchestrator | Monday 23 February 2026 20:17:36 +0000 (0:00:01.858) 0:07:58.908 *******
2026-02-23 20:18:05.999563 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:18:05.999583 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:18:05.999651 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:18:05.999674 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:18:05.999693 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:18:05.999711 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:18:05.999731 | orchestrator |
2026-02-23 20:18:05.999749 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-23 20:18:05.999767 | orchestrator | Monday 23 February 2026 20:17:38 +0000 (0:00:01.326) 0:08:00.235 *******
2026-02-23 20:18:05.999786 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:05.999807 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:05.999825 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:05.999843 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:05.999863 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:05.999883 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:05.999901 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:05.999921 | orchestrator |
2026-02-23 20:18:05.999942 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-23 20:18:05.999960 | orchestrator |
2026-02-23 20:18:05.999978 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-23 20:18:05.999996 | orchestrator | Monday 23 February 2026 20:17:39 +0000 (0:00:01.501) 0:08:01.737 *******
2026-02-23 20:18:06.000014 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:18:06.000033 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:18:06.000089 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:18:06.000138 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:18:06.000158 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:18:06.000176 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:18:06.000194 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:18:06.000212 | orchestrator |
2026-02-23 20:18:06.000270 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-23 20:18:06.000291 | orchestrator |
2026-02-23 20:18:06.000309 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-23 20:18:06.000328 | orchestrator | Monday 23 February 2026 20:17:40 +0000 (0:00:00.487) 0:08:02.224 *******
2026-02-23 20:18:06.000345 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:06.000364 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:06.000412 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:06.000431 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:06.000450 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:06.000470 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:06.000489 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:06.000507 | orchestrator |
2026-02-23 20:18:06.000526 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-23 20:18:06.000545 | orchestrator | Monday 23 February 2026 20:17:41 +0000 (0:00:01.373) 0:08:03.598 *******
2026-02-23 20:18:06.000563 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:18:06.000582 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:18:06.000598 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:18:06.000616 | orchestrator | ok: [testbed-manager]
2026-02-23 20:18:06.000634 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:18:06.000652 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:18:06.000669 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:18:06.000688 | orchestrator |
2026-02-23 20:18:06.000707 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-23 20:18:06.000726 | orchestrator | Monday 23 February 2026 20:17:42 +0000 (0:00:01.402) 0:08:05.000 *******
2026-02-23 20:18:06.000746 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:18:06.000787 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:18:06.000806 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:18:06.000875 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:18:06.000896 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:18:06.000914 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:18:06.000931 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:18:06.000950 | orchestrator |
2026-02-23 20:18:06.000969 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-23 20:18:06.000990 | orchestrator | Monday 23 February 2026 20:17:43 +0000 (0:00:00.630) 0:08:05.631 *******
2026-02-23 20:18:06.001011 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:18:06.001033 | orchestrator |
2026-02-23 20:18:06.001061 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-23 20:18:06.001082 | orchestrator | Monday 23 February 2026 20:17:44 +0000 (0:00:00.749) 0:08:06.381 *******
2026-02-23 20:18:06.001113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:18:06.001136 | orchestrator |
2026-02-23 20:18:06.001156 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-23 20:18:06.001177 | orchestrator | Monday 23 February 2026 20:17:45 +0000 (0:00:00.771) 0:08:07.152 *******
2026-02-23 20:18:06.001197 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:06.001216 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:06.001236 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:06.001257 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:06.001343 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:06.001366 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:06.001526 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:06.001547 | orchestrator |
2026-02-23 20:18:06.001594 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-23 20:18:06.001615 | orchestrator | Monday 23 February 2026 20:17:54 +0000 (0:00:09.387) 0:08:16.540 *******
2026-02-23 20:18:06.001632 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:06.001650 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:06.001667 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:06.001684 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:06.001702 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:06.001719 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:06.001737 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:06.001754 | orchestrator |
2026-02-23 20:18:06.001773 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-23 20:18:06.001791 | orchestrator | Monday 23 February 2026 20:17:55 +0000 (0:00:00.860) 0:08:17.400 *******
2026-02-23 20:18:06.001809 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:06.001827 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:06.001902 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:06.001929 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:06.001948 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:06.001966 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:06.001983 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:06.002000 | orchestrator |
2026-02-23 20:18:06.002089 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-23 20:18:06.002113 | orchestrator | Monday 23 February 2026 20:17:56 +0000 (0:00:01.393) 0:08:18.794 *******
2026-02-23 20:18:06.002129 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:06.002146 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:06.002163 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:06.002178 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:06.002194 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:06.002210 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:06.002225 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:06.002242 | orchestrator |
2026-02-23 20:18:06.002257 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-23 20:18:06.002274 | orchestrator | Monday 23 February 2026 20:17:58 +0000 (0:00:02.035) 0:08:20.829 *******
2026-02-23 20:18:06.002290 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:06.002300 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:06.002309 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:06.002319 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:06.002328 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:06.002337 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:06.002347 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:06.002362 | orchestrator |
2026-02-23 20:18:06.002402 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-23 20:18:06.002418 | orchestrator | Monday 23 February 2026 20:17:59 +0000 (0:00:01.277) 0:08:22.107 *******
2026-02-23 20:18:06.002434 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:06.002449 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:06.002465 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:06.002524 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:06.002541 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:06.002556 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:06.002571 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:06.002587 | orchestrator |
2026-02-23 20:18:06.002603 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-23 20:18:06.002619 | orchestrator |
2026-02-23 20:18:06.002635 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-23 20:18:06.002651 | orchestrator | Monday 23 February 2026 20:18:01 +0000 (0:00:01.306) 0:08:23.413 *******
2026-02-23 20:18:06.002703 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:18:06.002722 | orchestrator |
2026-02-23 20:18:06.002738 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-23 20:18:06.002754 | orchestrator | Monday 23 February 2026 20:18:02 +0000 (0:00:00.922) 0:08:24.336 *******
2026-02-23 20:18:06.002771 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:18:06.002800 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:18:06.002816 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:18:06.002833 | orchestrator | ok: [testbed-manager]
2026-02-23 20:18:06.002849 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:18:06.002865 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:18:06.002881 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:18:06.002897 | orchestrator |
2026-02-23 20:18:06.002913 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-23 20:18:06.002929 | orchestrator | Monday 23 February 2026 20:18:03 +0000 (0:00:00.847) 0:08:25.183 *******
2026-02-23 20:18:06.002946 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:06.002962 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:06.002978 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:06.002994 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:06.003011 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:06.003026 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:06.003042 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:06.003058 | orchestrator |
2026-02-23 20:18:06.003112 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-23 20:18:06.003131 | orchestrator | Monday 23 February 2026 20:18:04 +0000 (0:00:01.141) 0:08:26.325 *******
2026-02-23 20:18:06.003147 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:18:06.003162 | orchestrator |
2026-02-23 20:18:06.003178 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-23 20:18:06.003194 | orchestrator | Monday 23 February 2026 20:18:05 +0000 (0:00:00.983) 0:08:27.309 *******
2026-02-23 20:18:06.003210 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:18:06.003226 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:18:06.003241 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:18:06.003257 | orchestrator | ok: [testbed-manager]
2026-02-23 20:18:06.003274 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:18:06.003290 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:18:06.003307 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:18:06.003324 | orchestrator |
2026-02-23 20:18:06.003362 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-23 20:18:07.440552 | orchestrator | Monday 23 February 2026 20:18:05 +0000 (0:00:00.822) 0:08:28.132 *******
2026-02-23 20:18:07.440632 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:07.440641 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:07.440647 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:07.440653 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:07.440659 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:07.440664 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:07.440670 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:07.440675 | orchestrator |
2026-02-23 20:18:07.440682 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:18:07.440688 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-23 20:18:07.440696 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-23 20:18:07.440701 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-23 20:18:07.440724 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-23 20:18:07.440730 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-23 20:18:07.440735 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-23 20:18:07.440741 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-23 20:18:07.440746 | orchestrator |
2026-02-23 20:18:07.440751 | orchestrator |
2026-02-23 20:18:07.440757 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:18:07.440762 | orchestrator | Monday 23 February 2026 20:18:07 +0000 (0:00:01.087) 0:08:29.219 *******
2026-02-23 20:18:07.440768 | orchestrator | ===============================================================================
2026-02-23 20:18:07.440773 | orchestrator | osism.commons.packages : Install required packages --------------------- 84.94s
2026-02-23 20:18:07.440779 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.69s
2026-02-23 20:18:07.440784 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.55s
2026-02-23 20:18:07.440789 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.41s
2026-02-23 20:18:07.440794 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.73s
2026-02-23 20:18:07.440800 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.38s
2026-02-23 20:18:07.440805 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.75s
2026-02-23 20:18:07.440812 | orchestrator | osism.services.rng : Install rng package ------------------------------- 10.25s
2026-02-23 20:18:07.440817 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.16s
2026-02-23 20:18:07.440822 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.14s
2026-02-23 20:18:07.440828 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.00s
2026-02-23 20:18:07.440844 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.66s
2026-02-23 20:18:07.440849 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.63s
2026-02-23 20:18:07.440855 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.39s
2026-02-23 20:18:07.440860 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.90s
2026-02-23 20:18:07.440866 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.37s
2026-02-23 20:18:07.440871 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 8.34s
2026-02-23 20:18:07.440876 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.02s
2026-02-23 20:18:07.440882 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.90s
2026-02-23 20:18:07.440887 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.70s
2026-02-23 20:18:07.759148 | orchestrator | + osism apply fail2ban
2026-02-23 20:18:20.402227 | orchestrator | 2026-02-23 20:18:20 | INFO  | Prepare task for execution of fail2ban.
2026-02-23 20:18:20.483134 | orchestrator | 2026-02-23 20:18:20 | INFO  | Task 1728dcdc-0f29-41b2-810c-db5976a904e3 (fail2ban) was prepared for execution.
2026-02-23 20:18:20.483256 | orchestrator | 2026-02-23 20:18:20 | INFO  | It takes a moment until task 1728dcdc-0f29-41b2-810c-db5976a904e3 (fail2ban) has been started and output is visible here.
2026-02-23 20:18:43.350490 | orchestrator |
2026-02-23 20:18:43.350595 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-23 20:18:43.350634 | orchestrator |
2026-02-23 20:18:43.350644 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-23 20:18:43.350655 | orchestrator | Monday 23 February 2026 20:18:25 +0000 (0:00:00.252) 0:00:00.252 *******
2026-02-23 20:18:43.350667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-23 20:18:43.350679 | orchestrator |
2026-02-23 20:18:43.350689 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-23 20:18:43.350699 | orchestrator | Monday 23 February 2026 20:18:26 +0000 (0:00:01.113) 0:00:01.366 *******
2026-02-23 20:18:43.350708 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:43.350720 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:43.350729 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:43.350738 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:43.350747 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:43.350757 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:43.350767 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:43.350776 | orchestrator |
2026-02-23 20:18:43.350786 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-23 20:18:43.350796 | orchestrator | Monday 23 February 2026 20:18:38 +0000 (0:00:12.250) 0:00:13.616 *******
2026-02-23 20:18:43.350805 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:43.350814 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:43.350823 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:43.350832 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:43.350840 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:43.350849 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:43.350858 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:43.350867 | orchestrator |
2026-02-23 20:18:43.350876 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-23 20:18:43.350886 | orchestrator | Monday 23 February 2026 20:18:39 +0000 (0:00:01.519) 0:00:15.135 *******
2026-02-23 20:18:43.350895 | orchestrator | ok: [testbed-manager]
2026-02-23 20:18:43.350906 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:18:43.350915 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:18:43.350925 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:18:43.350934 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:18:43.350943 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:18:43.350952 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:18:43.350962 | orchestrator |
2026-02-23 20:18:43.350971 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-23 20:18:43.350981 | orchestrator | Monday 23 February 2026 20:18:41 +0000 (0:00:01.460) 0:00:16.595 *******
2026-02-23 20:18:43.350991 | orchestrator | changed: [testbed-manager]
2026-02-23 20:18:43.351001 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:18:43.351012 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:18:43.351021 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:18:43.351031 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:18:43.351041 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:18:43.351051 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:18:43.351061 | orchestrator |
2026-02-23 20:18:43.351071 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:18:43.351081 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:18:43.351093 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:18:43.351103 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:18:43.351114 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:18:43.351147 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:18:43.351158 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:18:43.351168 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:18:43.351177 | orchestrator |
2026-02-23 20:18:43.351187 | orchestrator |
2026-02-23 20:18:43.351197 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:18:43.351206 | orchestrator | Monday 23 February 2026 20:18:43 +0000 (0:00:01.620) 0:00:18.216 *******
2026-02-23 20:18:43.351216 | orchestrator | ===============================================================================
2026-02-23 20:18:43.351226 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.25s
2026-02-23 20:18:43.351236 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.62s
2026-02-23 20:18:43.351245 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.52s
2026-02-23 20:18:43.351255 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.46s
2026-02-23 20:18:43.351266 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.11s
2026-02-23 20:18:43.648309 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-23 20:18:43.648426 | orchestrator | + osism apply network
2026-02-23 20:18:55.660988 | orchestrator | 2026-02-23 20:18:55 | INFO  | Prepare task for execution of network.
2026-02-23 20:18:55.733059 | orchestrator | 2026-02-23 20:18:55 | INFO  | Task 3efa6407-787d-4eb8-804e-96aba080962f (network) was prepared for execution.
2026-02-23 20:18:55.733159 | orchestrator | 2026-02-23 20:18:55 | INFO  | It takes a moment until task 3efa6407-787d-4eb8-804e-96aba080962f (network) has been started and output is visible here.
2026-02-23 20:19:24.153686 | orchestrator |
2026-02-23 20:19:24.154616 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-23 20:19:24.154670 | orchestrator |
2026-02-23 20:19:24.154695 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-23 20:19:24.154716 | orchestrator | Monday 23 February 2026 20:18:59 +0000 (0:00:00.224) 0:00:00.224 *******
2026-02-23 20:19:24.154732 | orchestrator | ok: [testbed-manager]
2026-02-23 20:19:24.154744 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:19:24.154755 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:19:24.154765 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:19:24.154776 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:19:24.154787 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:19:24.154798 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:19:24.154808 | orchestrator |
2026-02-23 20:19:24.154819 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-23 20:19:24.154830 | orchestrator | Monday 23 February 2026 20:19:00 +0000 (0:00:00.570) 0:00:00.795 *******
2026-02-23 20:19:24.154843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-23 20:19:24.154857 | orchestrator |
2026-02-23 20:19:24.154869 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-23 20:19:24.154879 | orchestrator | Monday 23 February 2026 20:19:01 +0000 (0:00:00.907) 0:00:01.702 *******
2026-02-23 20:19:24.154890 | orchestrator | ok: [testbed-manager]
2026-02-23 20:19:24.154901 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:19:24.154911 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:19:24.154922 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:19:24.154933 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:19:24.154969 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:19:24.154981 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:19:24.154991 | orchestrator |
2026-02-23 20:19:24.155002 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-23 20:19:24.155012 | orchestrator | Monday 23 February 2026 20:19:03 +0000 (0:00:01.960) 0:00:03.662 *******
2026-02-23 20:19:24.155023 | orchestrator | ok: [testbed-manager]
2026-02-23 20:19:24.155033 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:19:24.155044 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:19:24.155054 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:19:24.155064 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:19:24.155075 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:19:24.155085 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:19:24.155095 | orchestrator |
2026-02-23 20:19:24.155106 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-23 20:19:24.155117 | orchestrator | Monday 23 February 2026 20:19:05 +0000 (0:00:00.983) 0:00:05.493 *******
2026-02-23 20:19:24.155127 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-23 20:19:24.155139 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-23 20:19:24.155150 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-23 20:19:24.155160 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-23 20:19:24.155171 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-23 20:19:24.155181 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-23 20:19:24.155192 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-23 20:19:24.155202 | orchestrator |
2026-02-23 20:19:24.155213 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-23 20:19:24.155224 | orchestrator | Monday 23 February 2026 20:19:06 +0000 (0:00:00.983) 0:00:06.477 *******
2026-02-23 20:19:24.155234 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-23 20:19:24.155246 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-23 20:19:24.155257 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-23 20:19:24.155267 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-23 20:19:24.155277 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-23 20:19:24.155288 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-23 20:19:24.155298 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-23 20:19:24.155309 | orchestrator |
2026-02-23 20:19:24.155319 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-23 20:19:24.155330 | orchestrator | Monday 23 February 2026 20:19:09 +0000 (0:00:03.505) 0:00:09.982 *******
2026-02-23 20:19:24.155341 | orchestrator | changed: [testbed-manager]
2026-02-23 20:19:24.155351 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:19:24.155362 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:19:24.155372 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:19:24.155382 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:19:24.155455 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:19:24.155473 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:19:24.155491 | orchestrator |
2026-02-23 20:19:24.155521 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-23 20:19:24.155532 | orchestrator | Monday 23 February 2026 20:19:11 +0000 (0:00:01.676) 0:00:11.659 *******
2026-02-23 20:19:24.155544 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-23 20:19:24.155554 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-23 20:19:24.155565 | orchestrator | ok: [testbed-node-3
-> localhost] 2026-02-23 20:19:24.155575 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-23 20:19:24.155585 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-23 20:19:24.155596 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-23 20:19:24.155607 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-23 20:19:24.155617 | orchestrator | 2026-02-23 20:19:24.155628 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-23 20:19:24.155638 | orchestrator | Monday 23 February 2026 20:19:13 +0000 (0:00:01.779) 0:00:13.438 ******* 2026-02-23 20:19:24.155658 | orchestrator | ok: [testbed-manager] 2026-02-23 20:19:24.155669 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:19:24.155680 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:19:24.155690 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:19:24.155700 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:19:24.155711 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:19:24.155721 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:19:24.155732 | orchestrator | 2026-02-23 20:19:24.155743 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-23 20:19:24.155776 | orchestrator | Monday 23 February 2026 20:19:14 +0000 (0:00:01.184) 0:00:14.623 ******* 2026-02-23 20:19:24.155840 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:19:24.155854 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:19:24.155865 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:19:24.155875 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:19:24.155886 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:19:24.155897 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:19:24.155907 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:19:24.155918 | orchestrator | 2026-02-23 20:19:24.155929 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-02-23 20:19:24.155940 | orchestrator | Monday 23 February 2026 20:19:14 +0000 (0:00:00.668) 0:00:15.292 ******* 2026-02-23 20:19:24.155950 | orchestrator | ok: [testbed-manager] 2026-02-23 20:19:24.155961 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:19:24.155971 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:19:24.155982 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:19:24.155993 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:19:24.156003 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:19:24.156043 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:19:24.156056 | orchestrator | 2026-02-23 20:19:24.156067 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-23 20:19:24.156078 | orchestrator | Monday 23 February 2026 20:19:17 +0000 (0:00:02.295) 0:00:17.587 ******* 2026-02-23 20:19:24.156089 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:19:24.156100 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:19:24.156110 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:19:24.156121 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:19:24.156132 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:19:24.156142 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:19:24.156154 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-23 20:19:24.156166 | orchestrator | 2026-02-23 20:19:24.156177 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-23 20:19:24.156188 | orchestrator | Monday 23 February 2026 20:19:18 +0000 (0:00:00.901) 0:00:18.489 ******* 2026-02-23 20:19:24.156199 | orchestrator | ok: [testbed-manager] 2026-02-23 20:19:24.156209 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:19:24.156220 | orchestrator | changed: [testbed-node-0] 2026-02-23 
20:19:24.156231 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:19:24.156242 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:19:24.156252 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:19:24.156263 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:19:24.156273 | orchestrator | 2026-02-23 20:19:24.156284 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-23 20:19:24.156295 | orchestrator | Monday 23 February 2026 20:19:19 +0000 (0:00:01.683) 0:00:20.173 ******* 2026-02-23 20:19:24.156306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:19:24.156319 | orchestrator | 2026-02-23 20:19:24.156330 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-23 20:19:24.156341 | orchestrator | Monday 23 February 2026 20:19:21 +0000 (0:00:01.227) 0:00:21.400 ******* 2026-02-23 20:19:24.156360 | orchestrator | ok: [testbed-manager] 2026-02-23 20:19:24.156371 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:19:24.156382 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:19:24.156420 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:19:24.156431 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:19:24.156442 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:19:24.156452 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:19:24.156463 | orchestrator | 2026-02-23 20:19:24.156474 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-23 20:19:24.156485 | orchestrator | Monday 23 February 2026 20:19:22 +0000 (0:00:00.997) 0:00:22.397 ******* 2026-02-23 20:19:24.156496 | orchestrator | ok: [testbed-manager] 2026-02-23 20:19:24.156527 | orchestrator | ok: [testbed-node-0] 2026-02-23 
20:19:24.156548 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:19:24.156559 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:19:24.156570 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:19:24.156580 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:19:24.156591 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:19:24.156601 | orchestrator | 2026-02-23 20:19:24.156612 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-23 20:19:24.156623 | orchestrator | Monday 23 February 2026 20:19:22 +0000 (0:00:00.838) 0:00:23.236 ******* 2026-02-23 20:19:24.156633 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-23 20:19:24.156644 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-23 20:19:24.156655 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-23 20:19:24.156666 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-23 20:19:24.156676 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-23 20:19:24.156687 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-23 20:19:24.156697 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-23 20:19:24.156708 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-23 20:19:24.156719 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-23 20:19:24.156730 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-23 20:19:24.156740 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-23 20:19:24.156751 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-02-23 20:19:24.156762 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-02-23 20:19:24.156772 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-23 20:19:24.156784 | orchestrator | 2026-02-23 20:19:24.156817 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-23 20:19:39.633295 | orchestrator | Monday 23 February 2026 20:19:24 +0000 (0:00:01.245) 0:00:24.482 ******* 2026-02-23 20:19:39.633451 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:19:39.633466 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:19:39.633476 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:19:39.633486 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:19:39.633495 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:19:39.633504 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:19:39.633514 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:19:39.633523 | orchestrator | 2026-02-23 20:19:39.633533 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-23 20:19:39.633544 | orchestrator | Monday 23 February 2026 20:19:24 +0000 (0:00:00.617) 0:00:25.100 ******* 2026-02-23 20:19:39.633556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:19:39.633591 | orchestrator | 2026-02-23 20:19:39.633601 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-23 20:19:39.633611 | orchestrator | Monday 23 February 2026 20:19:29 +0000 (0:00:04.733) 0:00:29.833 ******* 2026-02-23 20:19:39.633622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633632 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633664 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': 
'192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633777 | orchestrator | 2026-02-23 20:19:39.633782 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-23 20:19:39.633788 | orchestrator | Monday 23 February 2026 20:19:34 +0000 (0:00:05.321) 0:00:35.155 ******* 2026-02-23 20:19:39.633793 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633815 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-23 20:19:39.633842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:39.633879 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:51.327063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-23 20:19:51.327181 | orchestrator | 2026-02-23 20:19:51.327200 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-23 20:19:51.327213 | orchestrator | Monday 23 February 2026 20:19:39 +0000 (0:00:04.950) 0:00:40.106 ******* 2026-02-23 20:19:51.327227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:19:51.327239 | orchestrator | 2026-02-23 20:19:51.327250 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-23 20:19:51.327262 | orchestrator | Monday 23 February 2026 20:19:40 +0000 (0:00:01.028) 0:00:41.134 ******* 2026-02-23 20:19:51.327273 | orchestrator | ok: [testbed-manager] 2026-02-23 20:19:51.327285 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:19:51.327296 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:19:51.327307 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:19:51.327318 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:19:51.327328 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:19:51.327339 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:19:51.327350 | orchestrator | 2026-02-23 20:19:51.327361 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-02-23 20:19:51.327372 | orchestrator | Monday 23 February 2026 20:19:41 +0000 (0:00:01.032) 0:00:42.167 ******* 2026-02-23 20:19:51.327383 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-23 20:19:51.327491 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-23 20:19:51.327507 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-23 20:19:51.327518 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-23 20:19:51.327529 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-23 20:19:51.327540 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-23 20:19:51.327551 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-23 20:19:51.327561 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:19:51.327573 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-23 20:19:51.327584 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-23 20:19:51.327595 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-23 20:19:51.327606 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-23 20:19:51.327618 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-23 20:19:51.327630 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:19:51.327643 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-23 20:19:51.327673 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2026-02-23 20:19:51.327685 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-23 20:19:51.327698 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-23 20:19:51.327733 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:19:51.327745 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-23 20:19:51.327757 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-23 20:19:51.327769 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-23 20:19:51.327782 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-23 20:19:51.327794 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:19:51.327806 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-23 20:19:51.327818 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-23 20:19:51.327830 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-23 20:19:51.327842 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-23 20:19:51.327854 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:19:51.327865 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:19:51.327877 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-23 20:19:51.327890 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-23 20:19:51.327902 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-23 20:19:51.327915 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-23 20:19:51.327927 | 
orchestrator | skipping: [testbed-node-5] 2026-02-23 20:19:51.327938 | orchestrator | 2026-02-23 20:19:51.327951 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-02-23 20:19:51.327982 | orchestrator | Monday 23 February 2026 20:19:42 +0000 (0:00:00.754) 0:00:42.921 ******* 2026-02-23 20:19:51.327994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:19:51.328006 | orchestrator | 2026-02-23 20:19:51.328017 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-02-23 20:19:51.328027 | orchestrator | Monday 23 February 2026 20:19:43 +0000 (0:00:01.075) 0:00:43.996 ******* 2026-02-23 20:19:51.328038 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:19:51.328049 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:19:51.328059 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:19:51.328070 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:19:51.328081 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:19:51.328091 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:19:51.328102 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:19:51.328112 | orchestrator | 2026-02-23 20:19:51.328123 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-02-23 20:19:51.328134 | orchestrator | Monday 23 February 2026 20:19:44 +0000 (0:00:00.555) 0:00:44.551 ******* 2026-02-23 20:19:51.328145 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:19:51.328155 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:19:51.328166 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:19:51.328176 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:19:51.328187 | 
orchestrator | skipping: [testbed-node-3] 2026-02-23 20:19:51.328198 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:19:51.328208 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:19:51.328219 | orchestrator | 2026-02-23 20:19:51.328229 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-02-23 20:19:51.328240 | orchestrator | Monday 23 February 2026 20:19:44 +0000 (0:00:00.652) 0:00:45.204 ******* 2026-02-23 20:19:51.328251 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:19:51.328283 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:19:51.328293 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:19:51.328304 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:19:51.328315 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:19:51.328325 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:19:51.328336 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:19:51.328347 | orchestrator | 2026-02-23 20:19:51.328358 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-02-23 20:19:51.328368 | orchestrator | Monday 23 February 2026 20:19:45 +0000 (0:00:00.542) 0:00:45.747 ******* 2026-02-23 20:19:51.328379 | orchestrator | ok: [testbed-manager] 2026-02-23 20:19:51.328390 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:19:51.328421 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:19:51.328432 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:19:51.328443 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:19:51.328454 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:19:51.328464 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:19:51.328475 | orchestrator | 2026-02-23 20:19:51.328486 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-02-23 20:19:51.328497 | orchestrator | Monday 23 February 2026 20:19:47 +0000 (0:00:01.799) 0:00:47.546 ******* 
2026-02-23 20:19:51.328507 | orchestrator | ok: [testbed-manager]
2026-02-23 20:19:51.328518 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:19:51.328529 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:19:51.328539 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:19:51.328550 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:19:51.328560 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:19:51.328570 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:19:51.328581 | orchestrator |
2026-02-23 20:19:51.328592 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-02-23 20:19:51.328609 | orchestrator | Monday 23 February 2026 20:19:48 +0000 (0:00:00.917) 0:00:48.463 *******
2026-02-23 20:19:51.328620 | orchestrator | ok: [testbed-manager]
2026-02-23 20:19:51.328631 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:19:51.328641 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:19:51.328652 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:19:51.328662 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:19:51.328673 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:19:51.328683 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:19:51.328694 | orchestrator |
2026-02-23 20:19:51.328705 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-23 20:19:51.328716 | orchestrator | Monday 23 February 2026 20:19:50 +0000 (0:00:02.002) 0:00:50.466 *******
2026-02-23 20:19:51.328727 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:19:51.328737 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:19:51.328748 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:19:51.328759 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:19:51.328770 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:19:51.328780 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:19:51.328791 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:19:51.328802 | orchestrator |
2026-02-23 20:19:51.328812 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-23 20:19:51.328823 | orchestrator | Monday 23 February 2026 20:19:50 +0000 (0:00:00.690) 0:00:51.157 *******
2026-02-23 20:19:51.328834 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:19:51.328844 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:19:51.328855 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:19:51.328866 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:19:51.328876 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:19:51.328887 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:19:51.328897 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:19:51.328908 | orchestrator |
2026-02-23 20:19:51.328919 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:19:51.328931 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-23 20:19:51.328950 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-23 20:19:51.328969 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-23 20:19:51.584721 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-23 20:19:51.584822 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-23 20:19:51.584835 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-23 20:19:51.584847 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-23 20:19:51.584858 | orchestrator |
2026-02-23 20:19:51.584870 | orchestrator |
2026-02-23 20:19:51.584881 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:19:51.584893 | orchestrator | Monday 23 February 2026 20:19:51 +0000 (0:00:00.496) 0:00:51.654 *******
2026-02-23 20:19:51.584904 | orchestrator | ===============================================================================
2026-02-23 20:19:51.584915 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.32s
2026-02-23 20:19:51.584926 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.95s
2026-02-23 20:19:51.584936 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.73s
2026-02-23 20:19:51.584947 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.51s
2026-02-23 20:19:51.584958 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.30s
2026-02-23 20:19:51.584968 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.00s
2026-02-23 20:19:51.584979 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.96s
2026-02-23 20:19:51.584989 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.83s
2026-02-23 20:19:51.585000 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.80s
2026-02-23 20:19:51.585010 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.78s
2026-02-23 20:19:51.585021 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s
2026-02-23 20:19:51.585032 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.68s
2026-02-23 20:19:51.585042 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.25s
2026-02-23 20:19:51.585053 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.23s
2026-02-23 20:19:51.585063 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.18s
2026-02-23 20:19:51.585074 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.08s
2026-02-23 20:19:51.585084 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s
2026-02-23 20:19:51.585095 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.03s
2026-02-23 20:19:51.585106 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s
2026-02-23 20:19:51.585116 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s
2026-02-23 20:19:51.807108 | orchestrator | + osism apply wireguard
2026-02-23 20:20:03.659881 | orchestrator | 2026-02-23 20:20:03 | INFO  | Prepare task for execution of wireguard.
2026-02-23 20:20:03.738144 | orchestrator | 2026-02-23 20:20:03 | INFO  | Task c3135513-3545-428e-871e-bd228fb0c9c2 (wireguard) was prepared for execution.
2026-02-23 20:20:03.738295 | orchestrator | 2026-02-23 20:20:03 | INFO  | It takes a moment until task c3135513-3545-428e-871e-bd228fb0c9c2 (wireguard) has been started and output is visible here.
2026-02-23 20:20:21.233628 | orchestrator |
2026-02-23 20:20:21.233781 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-23 20:20:21.233811 | orchestrator |
2026-02-23 20:20:21.233831 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-23 20:20:21.233850 | orchestrator | Monday 23 February 2026 20:20:07 +0000 (0:00:00.179) 0:00:00.179 *******
2026-02-23 20:20:21.233870 | orchestrator | ok: [testbed-manager]
2026-02-23 20:20:21.233889 | orchestrator |
2026-02-23 20:20:21.233908 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-23 20:20:21.233926 | orchestrator | Monday 23 February 2026 20:20:08 +0000 (0:00:01.234) 0:00:01.413 *******
2026-02-23 20:20:21.233945 | orchestrator | changed: [testbed-manager]
2026-02-23 20:20:21.233963 | orchestrator |
2026-02-23 20:20:21.233980 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-23 20:20:21.234000 | orchestrator | Monday 23 February 2026 20:20:14 +0000 (0:00:05.460) 0:00:06.874 *******
2026-02-23 20:20:21.234095 | orchestrator | changed: [testbed-manager]
2026-02-23 20:20:21.234112 | orchestrator |
2026-02-23 20:20:21.234123 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-23 20:20:21.234134 | orchestrator | Monday 23 February 2026 20:20:14 +0000 (0:00:00.486) 0:00:07.360 *******
2026-02-23 20:20:21.234146 | orchestrator | changed: [testbed-manager]
2026-02-23 20:20:21.234158 | orchestrator |
2026-02-23 20:20:21.234170 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-23 20:20:21.234183 | orchestrator | Monday 23 February 2026 20:20:15 +0000 (0:00:00.380) 0:00:07.741 *******
2026-02-23 20:20:21.234195 | orchestrator | ok: [testbed-manager]
2026-02-23 20:20:21.234207 | orchestrator |
2026-02-23 20:20:21.234219 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-23 20:20:21.234231 | orchestrator | Monday 23 February 2026 20:20:15 +0000 (0:00:00.571) 0:00:08.312 *******
2026-02-23 20:20:21.234243 | orchestrator | ok: [testbed-manager]
2026-02-23 20:20:21.234255 | orchestrator |
2026-02-23 20:20:21.234268 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-23 20:20:21.234280 | orchestrator | Monday 23 February 2026 20:20:16 +0000 (0:00:00.384) 0:00:08.697 *******
2026-02-23 20:20:21.234292 | orchestrator | ok: [testbed-manager]
2026-02-23 20:20:21.234304 | orchestrator |
2026-02-23 20:20:21.234316 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-23 20:20:21.234329 | orchestrator | Monday 23 February 2026 20:20:16 +0000 (0:00:00.363) 0:00:09.061 *******
2026-02-23 20:20:21.234341 | orchestrator | changed: [testbed-manager]
2026-02-23 20:20:21.234354 | orchestrator |
2026-02-23 20:20:21.234366 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-23 20:20:21.234378 | orchestrator | Monday 23 February 2026 20:20:17 +0000 (0:00:01.062) 0:00:10.124 *******
2026-02-23 20:20:21.234390 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-23 20:20:21.234404 | orchestrator | changed: [testbed-manager]
2026-02-23 20:20:21.234440 | orchestrator |
2026-02-23 20:20:21.234452 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-23 20:20:21.234465 | orchestrator | Monday 23 February 2026 20:20:18 +0000 (0:00:00.861) 0:00:10.985 *******
2026-02-23 20:20:21.234499 | orchestrator | changed: [testbed-manager]
2026-02-23 20:20:21.234513 | orchestrator |
2026-02-23 20:20:21.234526 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-23 20:20:21.234539 | orchestrator | Monday 23 February 2026 20:20:20 +0000 (0:00:01.645) 0:00:12.630 *******
2026-02-23 20:20:21.234549 | orchestrator | changed: [testbed-manager]
2026-02-23 20:20:21.234560 | orchestrator |
2026-02-23 20:20:21.234571 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:20:21.234607 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:20:21.234619 | orchestrator |
2026-02-23 20:20:21.234630 | orchestrator |
2026-02-23 20:20:21.234642 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:20:21.234652 | orchestrator | Monday 23 February 2026 20:20:20 +0000 (0:00:00.885) 0:00:13.515 *******
2026-02-23 20:20:21.234663 | orchestrator | ===============================================================================
2026-02-23 20:20:21.234674 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.46s
2026-02-23 20:20:21.234685 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s
2026-02-23 20:20:21.234695 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.23s
2026-02-23 20:20:21.234706 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.06s
2026-02-23 20:20:21.234717 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.89s
2026-02-23 20:20:21.234728 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.86s
2026-02-23 20:20:21.234739 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.57s
2026-02-23 20:20:21.234749 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s
2026-02-23 20:20:21.234760 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s
2026-02-23 20:20:21.234776 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s
2026-02-23 20:20:21.234788 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.36s
2026-02-23 20:20:21.414571 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-23 20:20:21.443154 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-23 20:20:21.443249 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-23 20:20:21.523627 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 173 0 --:--:-- --:--:-- --:--:-- 175
2026-02-23 20:20:21.538489 | orchestrator | + osism apply --environment custom workarounds
2026-02-23 20:20:23.320643 | orchestrator | 2026-02-23 20:20:23 | INFO  | Trying to run play workarounds in environment custom
2026-02-23 20:20:33.371823 | orchestrator | 2026-02-23 20:20:33 | INFO  | Prepare task for execution of workarounds.
2026-02-23 20:20:33.451356 | orchestrator | 2026-02-23 20:20:33 | INFO  | Task faf8edc7-e35e-44ac-8d3b-86f690416762 (workarounds) was prepared for execution.
2026-02-23 20:20:33.451481 | orchestrator | 2026-02-23 20:20:33 | INFO  | It takes a moment until task faf8edc7-e35e-44ac-8d3b-86f690416762 (workarounds) has been started and output is visible here.
2026-02-23 20:20:57.320846 | orchestrator |
2026-02-23 20:20:57.320953 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:20:57.320969 | orchestrator |
2026-02-23 20:20:57.320981 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-23 20:20:57.320992 | orchestrator | Monday 23 February 2026 20:20:37 +0000 (0:00:00.112) 0:00:00.112 *******
2026-02-23 20:20:57.321004 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-23 20:20:57.321015 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-23 20:20:57.321026 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-23 20:20:57.321037 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-23 20:20:57.321048 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-23 20:20:57.321058 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-23 20:20:57.321070 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-23 20:20:57.321104 | orchestrator |
2026-02-23 20:20:57.321116 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-23 20:20:57.321127 | orchestrator |
2026-02-23 20:20:57.321138 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-23 20:20:57.321148 | orchestrator | Monday 23 February 2026 20:20:37 +0000 (0:00:00.689) 0:00:00.801 *******
2026-02-23 20:20:57.321159 | orchestrator | ok: [testbed-manager]
2026-02-23 20:20:57.321171 | orchestrator |
2026-02-23 20:20:57.321182 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-23 20:20:57.321192 | orchestrator |
2026-02-23 20:20:57.321203 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-23 20:20:57.321214 | orchestrator | Monday 23 February 2026 20:20:39 +0000 (0:00:02.092) 0:00:02.893 *******
2026-02-23 20:20:57.321225 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:20:57.321236 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:20:57.321246 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:20:57.321257 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:20:57.321267 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:20:57.321278 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:20:57.321288 | orchestrator |
2026-02-23 20:20:57.321299 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-23 20:20:57.321310 | orchestrator |
2026-02-23 20:20:57.321321 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-23 20:20:57.321331 | orchestrator | Monday 23 February 2026 20:20:41 +0000 (0:00:01.809) 0:00:04.703 *******
2026-02-23 20:20:57.321343 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-23 20:20:57.321355 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-23 20:20:57.321365 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-23 20:20:57.321376 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-23 20:20:57.321387 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-23 20:20:57.321400 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-23 20:20:57.321412 | orchestrator |
2026-02-23 20:20:57.321456 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-23 20:20:57.321469 | orchestrator | Monday 23 February 2026 20:20:43 +0000 (0:00:01.433) 0:00:06.137 *******
2026-02-23 20:20:57.321481 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:20:57.321494 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:20:57.321505 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:20:57.321517 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:20:57.321529 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:20:57.321541 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:20:57.321552 | orchestrator |
2026-02-23 20:20:57.321564 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-23 20:20:57.321576 | orchestrator | Monday 23 February 2026 20:20:46 +0000 (0:00:03.538) 0:00:09.675 *******
2026-02-23 20:20:57.321588 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:20:57.321615 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:20:57.321627 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:20:57.321639 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:20:57.321650 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:20:57.321662 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:20:57.321674 | orchestrator |
2026-02-23 20:20:57.321685 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-23 20:20:57.321697 | orchestrator |
2026-02-23 20:20:57.321708 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-23 20:20:57.321721 | orchestrator | Monday 23 February 2026 20:20:47 +0000 (0:00:00.682) 0:00:10.357 *******
2026-02-23 20:20:57.321741 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:20:57.321752 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:20:57.321762 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:20:57.321773 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:20:57.321783 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:20:57.321794 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:20:57.321804 | orchestrator | changed: [testbed-manager]
2026-02-23 20:20:57.321814 | orchestrator |
2026-02-23 20:20:57.321825 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-23 20:20:57.321836 | orchestrator | Monday 23 February 2026 20:20:48 +0000 (0:00:01.486) 0:00:11.844 *******
2026-02-23 20:20:57.321847 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:20:57.321857 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:20:57.321868 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:20:57.321878 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:20:57.321889 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:20:57.321899 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:20:57.321927 | orchestrator | changed: [testbed-manager]
2026-02-23 20:20:57.321938 | orchestrator |
2026-02-23 20:20:57.321949 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-23 20:20:57.321960 | orchestrator | Monday 23 February 2026 20:20:50 +0000 (0:00:01.541) 0:00:13.441 *******
2026-02-23 20:20:57.321971 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:20:57.321981 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:20:57.321992 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:20:57.322003 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:20:57.322061 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:20:57.322075 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:20:57.322086 | orchestrator | ok: [testbed-manager]
2026-02-23 20:20:57.322097 | orchestrator |
2026-02-23 20:20:57.322108 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-23 20:20:57.322119 | orchestrator | Monday 23 February 2026 20:20:52 +0000 (0:00:01.541) 0:00:14.983 *******
2026-02-23 20:20:57.322129 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:20:57.322140 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:20:57.322151 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:20:57.322161 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:20:57.322172 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:20:57.322183 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:20:57.322193 | orchestrator | changed: [testbed-manager]
2026-02-23 20:20:57.322204 | orchestrator |
2026-02-23 20:20:57.322214 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-23 20:20:57.322225 | orchestrator | Monday 23 February 2026 20:20:53 +0000 (0:00:01.891) 0:00:16.875 *******
2026-02-23 20:20:57.322236 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:20:57.322246 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:20:57.322257 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:20:57.322267 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:20:57.322278 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:20:57.322288 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:20:57.322299 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:20:57.322310 | orchestrator |
2026-02-23 20:20:57.322320 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-23 20:20:57.322331 | orchestrator |
2026-02-23 20:20:57.322342 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-23 20:20:57.322352 | orchestrator | Monday 23 February 2026 20:20:54 +0000 (0:00:00.628) 0:00:17.503 *******
2026-02-23 20:20:57.322363 | orchestrator | ok: [testbed-manager]
2026-02-23 20:20:57.322374 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:20:57.322384 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:20:57.322395 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:20:57.322406 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:20:57.322438 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:20:57.322458 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:20:57.322469 | orchestrator |
2026-02-23 20:20:57.322480 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:20:57.322493 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-23 20:20:57.322505 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-23 20:20:57.322516 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-23 20:20:57.322527 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-23 20:20:57.322538 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-23 20:20:57.322549 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-23 20:20:57.322560 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-23 20:20:57.322571 | orchestrator |
2026-02-23 20:20:57.322582 | orchestrator |
2026-02-23 20:20:57.322600 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:20:57.322611 | orchestrator | Monday 23 February 2026 20:20:57 +0000 (0:00:02.699) 0:00:20.203 *******
2026-02-23 20:20:57.322622 | orchestrator | ===============================================================================
2026-02-23 20:20:57.322633 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.54s
2026-02-23 20:20:57.322644 | orchestrator | Install python3-docker -------------------------------------------------- 2.70s
2026-02-23 20:20:57.322655 | orchestrator | Apply netplan configuration --------------------------------------------- 2.09s
2026-02-23 20:20:57.322666 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.89s
2026-02-23 20:20:57.322676 | orchestrator | Apply netplan configuration --------------------------------------------- 1.81s
2026-02-23 20:20:57.322687 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.60s
2026-02-23 20:20:57.322698 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.54s
2026-02-23 20:20:57.322709 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.49s
2026-02-23 20:20:57.322719 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.43s
2026-02-23 20:20:57.322730 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.69s
2026-02-23 20:20:57.322741 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.68s
2026-02-23 20:20:57.322759 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2026-02-23 20:20:57.881518 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-23 20:21:09.975223 | orchestrator | 2026-02-23 20:21:09 | INFO  | Prepare task for execution of reboot.
2026-02-23 20:21:10.047865 | orchestrator | 2026-02-23 20:21:10 | INFO  | Task 65109dc9-09c2-472b-aabf-58c9613b483e (reboot) was prepared for execution.
2026-02-23 20:21:10.047958 | orchestrator | 2026-02-23 20:21:10 | INFO  | It takes a moment until task 65109dc9-09c2-472b-aabf-58c9613b483e (reboot) has been started and output is visible here.
2026-02-23 20:21:19.357633 | orchestrator | 2026-02-23 20:21:19.357717 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-23 20:21:19.357727 | orchestrator | 2026-02-23 20:21:19.357735 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-23 20:21:19.357757 | orchestrator | Monday 23 February 2026 20:21:13 +0000 (0:00:00.152) 0:00:00.152 ******* 2026-02-23 20:21:19.357764 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:21:19.357771 | orchestrator | 2026-02-23 20:21:19.357777 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-23 20:21:19.357783 | orchestrator | Monday 23 February 2026 20:21:13 +0000 (0:00:00.088) 0:00:00.240 ******* 2026-02-23 20:21:19.357789 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:21:19.357795 | orchestrator | 2026-02-23 20:21:19.357801 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-23 20:21:19.357808 | orchestrator | Monday 23 February 2026 20:21:14 +0000 (0:00:00.945) 0:00:01.185 ******* 2026-02-23 20:21:19.357814 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:21:19.357820 | orchestrator | 2026-02-23 20:21:19.357826 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-23 20:21:19.357832 | orchestrator | 2026-02-23 20:21:19.357838 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-23 20:21:19.357844 | orchestrator | Monday 23 February 2026 20:21:14 +0000 (0:00:00.092) 0:00:01.277 ******* 2026-02-23 20:21:19.357850 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:21:19.357856 | orchestrator | 2026-02-23 20:21:19.357862 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-23 20:21:19.357868 | orchestrator | Monday 23 February 
2026 20:21:14 +0000 (0:00:00.087) 0:00:01.365 ******* 2026-02-23 20:21:19.357874 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:21:19.357880 | orchestrator | 2026-02-23 20:21:19.357886 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-23 20:21:19.357892 | orchestrator | Monday 23 February 2026 20:21:15 +0000 (0:00:00.644) 0:00:02.010 ******* 2026-02-23 20:21:19.357898 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:21:19.357905 | orchestrator | 2026-02-23 20:21:19.357911 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-23 20:21:19.357917 | orchestrator | 2026-02-23 20:21:19.357923 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-23 20:21:19.357929 | orchestrator | Monday 23 February 2026 20:21:15 +0000 (0:00:00.096) 0:00:02.107 ******* 2026-02-23 20:21:19.357935 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:21:19.357941 | orchestrator | 2026-02-23 20:21:19.357947 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-23 20:21:19.357953 | orchestrator | Monday 23 February 2026 20:21:15 +0000 (0:00:00.152) 0:00:02.260 ******* 2026-02-23 20:21:19.357959 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:21:19.357965 | orchestrator | 2026-02-23 20:21:19.357971 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-23 20:21:19.357977 | orchestrator | Monday 23 February 2026 20:21:16 +0000 (0:00:00.654) 0:00:02.914 ******* 2026-02-23 20:21:19.357983 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:21:19.357989 | orchestrator | 2026-02-23 20:21:19.357995 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-23 20:21:19.358001 | orchestrator | 2026-02-23 20:21:19.358007 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-02-23 20:21:19.358048 | orchestrator | Monday 23 February 2026 20:21:16 +0000 (0:00:00.095) 0:00:03.010 ******* 2026-02-23 20:21:19.358055 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:21:19.358061 | orchestrator | 2026-02-23 20:21:19.358067 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-23 20:21:19.358082 | orchestrator | Monday 23 February 2026 20:21:16 +0000 (0:00:00.087) 0:00:03.097 ******* 2026-02-23 20:21:19.358088 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:21:19.358094 | orchestrator | 2026-02-23 20:21:19.358100 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-23 20:21:19.358106 | orchestrator | Monday 23 February 2026 20:21:17 +0000 (0:00:00.698) 0:00:03.796 ******* 2026-02-23 20:21:19.358112 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:21:19.358132 | orchestrator | 2026-02-23 20:21:19.358138 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-23 20:21:19.358144 | orchestrator | 2026-02-23 20:21:19.358150 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-23 20:21:19.358156 | orchestrator | Monday 23 February 2026 20:21:17 +0000 (0:00:00.108) 0:00:03.905 ******* 2026-02-23 20:21:19.358162 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:21:19.358168 | orchestrator | 2026-02-23 20:21:19.358174 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-23 20:21:19.358180 | orchestrator | Monday 23 February 2026 20:21:17 +0000 (0:00:00.079) 0:00:03.985 ******* 2026-02-23 20:21:19.358186 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:21:19.358192 | orchestrator | 2026-02-23 20:21:19.358199 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-02-23 20:21:19.358206 | orchestrator | Monday 23 February 2026 20:21:18 +0000 (0:00:00.679) 0:00:04.665 ******* 2026-02-23 20:21:19.358213 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:21:19.358219 | orchestrator | 2026-02-23 20:21:19.358226 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-23 20:21:19.358233 | orchestrator | 2026-02-23 20:21:19.358240 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-23 20:21:19.358247 | orchestrator | Monday 23 February 2026 20:21:18 +0000 (0:00:00.102) 0:00:04.767 ******* 2026-02-23 20:21:19.358254 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:21:19.358260 | orchestrator | 2026-02-23 20:21:19.358267 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-23 20:21:19.358274 | orchestrator | Monday 23 February 2026 20:21:18 +0000 (0:00:00.091) 0:00:04.858 ******* 2026-02-23 20:21:19.358281 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:21:19.358288 | orchestrator | 2026-02-23 20:21:19.358294 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-23 20:21:19.358301 | orchestrator | Monday 23 February 2026 20:21:19 +0000 (0:00:00.644) 0:00:05.503 ******* 2026-02-23 20:21:19.358320 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:21:19.358327 | orchestrator | 2026-02-23 20:21:19.358334 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:21:19.358343 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:21:19.358351 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:21:19.358358 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-02-23 20:21:19.358365 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:21:19.358371 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:21:19.358377 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:21:19.358383 | orchestrator | 2026-02-23 20:21:19.358389 | orchestrator | 2026-02-23 20:21:19.358395 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:21:19.358401 | orchestrator | Monday 23 February 2026 20:21:19 +0000 (0:00:00.035) 0:00:05.539 ******* 2026-02-23 20:21:19.358408 | orchestrator | =============================================================================== 2026-02-23 20:21:19.358414 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.27s 2026-02-23 20:21:19.358420 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.59s 2026-02-23 20:21:19.358442 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.53s 2026-02-23 20:21:19.564396 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-23 20:21:31.465017 | orchestrator | 2026-02-23 20:21:31 | INFO  | Prepare task for execution of wait-for-connection. 2026-02-23 20:21:31.537581 | orchestrator | 2026-02-23 20:21:31 | INFO  | Task 2b06b609-b1d1-4217-a5da-1ba3d0ec9394 (wait-for-connection) was prepared for execution. 2026-02-23 20:21:31.537654 | orchestrator | 2026-02-23 20:21:31 | INFO  | It takes a moment until task 2b06b609-b1d1-4217-a5da-1ba3d0ec9394 (wait-for-connection) has been started and output is visible here. 
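The `wait-for-connection` play prepared above polls each node until SSH answers again after the reboot. A plain-shell equivalent of that polling pattern (hypothetical helper, not the playbook itself) looks like:

```shell
# Sketch of an SSH-reachability poll, assuming BatchMode key auth; the
# function name, timeout handling, and 5 s cadence are illustrative only.
wait_for_ssh() {
    local host=$1
    local deadline=$((SECONDS + ${2:-600}))   # default: give up after 10 min
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( SECONDS >= deadline )) && return 1
        sleep 5
    done
}
```

In the job itself this is done by the Ansible play, which retries all six nodes in parallel (11.5 s total in the recap below).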
2026-02-23 20:21:47.531738 | orchestrator | 2026-02-23 20:21:47.531870 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-23 20:21:47.531898 | orchestrator | 2026-02-23 20:21:47.531919 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-23 20:21:47.531937 | orchestrator | Monday 23 February 2026 20:21:35 +0000 (0:00:00.205) 0:00:00.205 ******* 2026-02-23 20:21:47.531956 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:21:47.531972 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:21:47.531984 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:21:47.531995 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:21:47.532006 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:21:47.532034 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:21:47.532046 | orchestrator | 2026-02-23 20:21:47.532057 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:21:47.532068 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:21:47.532081 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:21:47.532092 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:21:47.532103 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:21:47.532114 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:21:47.532125 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:21:47.532136 | orchestrator | 2026-02-23 20:21:47.532147 | orchestrator | 2026-02-23 20:21:47.532158 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-23 20:21:47.532169 | orchestrator | Monday 23 February 2026 20:21:47 +0000 (0:00:11.517) 0:00:11.723 ******* 2026-02-23 20:21:47.532180 | orchestrator | =============================================================================== 2026-02-23 20:21:47.532191 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2026-02-23 20:21:47.836655 | orchestrator | + osism apply hddtemp 2026-02-23 20:21:59.840570 | orchestrator | 2026-02-23 20:21:59 | INFO  | Prepare task for execution of hddtemp. 2026-02-23 20:21:59.914300 | orchestrator | 2026-02-23 20:21:59 | INFO  | Task 9f52f2de-0ddc-422d-a209-80c304318b47 (hddtemp) was prepared for execution. 2026-02-23 20:21:59.914396 | orchestrator | 2026-02-23 20:21:59 | INFO  | It takes a moment until task 9f52f2de-0ddc-422d-a209-80c304318b47 (hddtemp) has been started and output is visible here. 2026-02-23 20:22:29.256885 | orchestrator | 2026-02-23 20:22:29.257020 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-23 20:22:29.257038 | orchestrator | 2026-02-23 20:22:29.257050 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-23 20:22:29.257062 | orchestrator | Monday 23 February 2026 20:22:03 +0000 (0:00:00.185) 0:00:00.185 ******* 2026-02-23 20:22:29.257137 | orchestrator | ok: [testbed-manager] 2026-02-23 20:22:29.257152 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:22:29.257164 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:22:29.257175 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:22:29.257186 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:22:29.257197 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:22:29.257208 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:22:29.257219 | orchestrator | 2026-02-23 20:22:29.257230 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-02-23 20:22:29.257241 | orchestrator | Monday 23 February 2026 20:22:04 +0000 (0:00:00.512) 0:00:00.698 ******* 2026-02-23 20:22:29.257254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:22:29.257267 | orchestrator | 2026-02-23 20:22:29.257279 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-23 20:22:29.257289 | orchestrator | Monday 23 February 2026 20:22:05 +0000 (0:00:00.980) 0:00:01.678 ******* 2026-02-23 20:22:29.257300 | orchestrator | ok: [testbed-manager] 2026-02-23 20:22:29.257311 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:22:29.257321 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:22:29.257333 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:22:29.257352 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:22:29.257372 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:22:29.257391 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:22:29.257409 | orchestrator | 2026-02-23 20:22:29.257429 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-23 20:22:29.257476 | orchestrator | Monday 23 February 2026 20:22:07 +0000 (0:00:01.869) 0:00:03.548 ******* 2026-02-23 20:22:29.257496 | orchestrator | changed: [testbed-manager] 2026-02-23 20:22:29.257516 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:22:29.257535 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:22:29.257553 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:22:29.257571 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:22:29.257585 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:22:29.257596 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:22:29.257607 | 
orchestrator | 2026-02-23 20:22:29.257618 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-02-23 20:22:29.257629 | orchestrator | Monday 23 February 2026 20:22:08 +0000 (0:00:01.025) 0:00:04.573 ******* 2026-02-23 20:22:29.257639 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:22:29.257650 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:22:29.257661 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:22:29.257671 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:22:29.257682 | orchestrator | ok: [testbed-manager] 2026-02-23 20:22:29.257692 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:22:29.257703 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:22:29.257713 | orchestrator | 2026-02-23 20:22:29.257724 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-23 20:22:29.257735 | orchestrator | Monday 23 February 2026 20:22:10 +0000 (0:00:01.978) 0:00:06.552 ******* 2026-02-23 20:22:29.257746 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:22:29.257757 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:22:29.257776 | orchestrator | changed: [testbed-manager] 2026-02-23 20:22:29.257794 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:22:29.257832 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:22:29.257852 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:22:29.257869 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:22:29.257887 | orchestrator | 2026-02-23 20:22:29.257907 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-23 20:22:29.257925 | orchestrator | Monday 23 February 2026 20:22:11 +0000 (0:00:00.790) 0:00:07.343 ******* 2026-02-23 20:22:29.257944 | orchestrator | changed: [testbed-manager] 2026-02-23 20:22:29.257955 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:22:29.257978 | orchestrator | changed: [testbed-node-2] 
2026-02-23 20:22:29.257989 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:22:29.258000 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:22:29.258011 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:22:29.258085 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:22:29.258096 | orchestrator | 2026-02-23 20:22:29.258108 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-23 20:22:29.258118 | orchestrator | Monday 23 February 2026 20:22:25 +0000 (0:00:14.215) 0:00:21.558 ******* 2026-02-23 20:22:29.258130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:22:29.258141 | orchestrator | 2026-02-23 20:22:29.258152 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-23 20:22:29.258162 | orchestrator | Monday 23 February 2026 20:22:26 +0000 (0:00:01.184) 0:00:22.743 ******* 2026-02-23 20:22:29.258173 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:22:29.258184 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:22:29.258194 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:22:29.258204 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:22:29.258215 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:22:29.258225 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:22:29.258236 | orchestrator | changed: [testbed-manager] 2026-02-23 20:22:29.258246 | orchestrator | 2026-02-23 20:22:29.258257 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:22:29.258268 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:22:29.258303 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:22:29.258315 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:22:29.258325 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:22:29.258336 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:22:29.258347 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:22:29.258358 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:22:29.258368 | orchestrator | 2026-02-23 20:22:29.258379 | orchestrator | 2026-02-23 20:22:29.258390 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:22:29.258400 | orchestrator | Monday 23 February 2026 20:22:28 +0000 (0:00:02.541) 0:00:25.285 ******* 2026-02-23 20:22:29.258412 | orchestrator | =============================================================================== 2026-02-23 20:22:29.258428 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.22s 2026-02-23 20:22:29.258471 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.54s 2026-02-23 20:22:29.258489 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.98s 2026-02-23 20:22:29.258506 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.87s 2026-02-23 20:22:29.258524 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.18s 2026-02-23 20:22:29.258543 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.03s 2026-02-23 20:22:29.258577 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 0.98s 2026-02-23 20:22:29.258597 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.79s 2026-02-23 20:22:29.258614 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.51s 2026-02-23 20:22:29.539017 | orchestrator | ++ semver latest 7.1.1 2026-02-23 20:22:29.577894 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-23 20:22:29.577988 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-23 20:22:29.578003 | orchestrator | + sudo systemctl restart manager.service 2026-02-23 20:22:46.096002 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-23 20:22:46.096142 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-23 20:22:46.096170 | orchestrator | + local max_attempts=60 2026-02-23 20:22:46.096191 | orchestrator | + local name=ceph-ansible 2026-02-23 20:22:46.096243 | orchestrator | + local attempt_num=1 2026-02-23 20:22:46.096256 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:22:46.131159 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:22:46.131266 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:22:46.131287 | orchestrator | + sleep 5 2026-02-23 20:22:51.135784 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:22:51.160887 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:22:51.160983 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:22:51.160997 | orchestrator | + sleep 5 2026-02-23 20:22:56.163761 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:22:56.200236 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:22:56.200364 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:22:56.200381 | orchestrator | + sleep 5 2026-02-23 20:23:01.204186 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:01.237314 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:01.237409 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:23:01.237423 | orchestrator | + sleep 5 2026-02-23 20:23:06.241827 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:06.282188 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:06.282288 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:23:06.282304 | orchestrator | + sleep 5 2026-02-23 20:23:11.286859 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:11.321685 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:11.321803 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:23:11.321829 | orchestrator | + sleep 5 2026-02-23 20:23:16.326339 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:16.362713 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:16.362836 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:23:16.362860 | orchestrator | + sleep 5 2026-02-23 20:23:21.370202 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:21.415021 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:21.415099 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:23:21.415107 | orchestrator | + sleep 5 2026-02-23 20:23:26.418772 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:26.432657 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:26.432735 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:23:26.432745 | orchestrator | + sleep 5 2026-02-23 20:23:31.436235 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:31.471860 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:31.472009 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:23:31.472025 | orchestrator | + sleep 5 2026-02-23 20:23:36.476294 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:36.512828 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:36.512893 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:23:36.512898 | orchestrator | + sleep 5 2026-02-23 20:23:41.517849 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:41.555086 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:41.555182 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:23:41.555222 | orchestrator | + sleep 5 2026-02-23 20:23:46.559677 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:46.596781 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:46.596879 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-23 20:23:46.596893 | orchestrator | + sleep 5 2026-02-23 20:23:51.601327 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-23 20:23:51.632000 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:51.632087 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-23 20:23:51.632099 | orchestrator | + local max_attempts=60 2026-02-23 20:23:51.632114 | orchestrator | + local name=kolla-ansible 2026-02-23 20:23:51.632129 | orchestrator | + local attempt_num=1 2026-02-23 20:23:51.632897 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-23 20:23:51.666909 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:51.667024 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-02-23 20:23:51.667219 | orchestrator | + local max_attempts=60 2026-02-23 20:23:51.667248 | orchestrator | + local name=osism-ansible 2026-02-23 20:23:51.667273 | orchestrator | + local attempt_num=1 2026-02-23 20:23:51.667315 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-23 20:23:51.701818 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-23 20:23:51.701901 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-23 20:23:51.701915 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-23 20:23:51.851942 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-23 20:23:51.996607 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-23 20:23:52.140619 | orchestrator | ARA in osism-ansible already disabled. 2026-02-23 20:23:52.294506 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-23 20:23:52.296085 | orchestrator | + osism apply gather-facts 2026-02-23 20:24:04.370189 | orchestrator | 2026-02-23 20:24:04 | INFO  | Prepare task for execution of gather-facts. 2026-02-23 20:24:04.433306 | orchestrator | 2026-02-23 20:24:04 | INFO  | Task 3e7967a8-a5c2-4f95-a1ee-923c0a89657f (gather-facts) was prepared for execution. 2026-02-23 20:24:04.433400 | orchestrator | 2026-02-23 20:24:04 | INFO  | It takes a moment until task 3e7967a8-a5c2-4f95-a1ee-923c0a89657f (gather-facts) has been started and output is visible here. 
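The health-check loop traced above can be reconstructed as a small shell helper. The local variables (`max_attempts`, `name`, `attempt_num`), the `docker inspect` health probe, and the 5-second retry cadence are taken directly from the xtrace; the exact function body is an assumption.

```shell
# Reconstructed sketch of wait_for_container_healthy; the trace calls
# /usr/bin/docker directly, wrapped here so the probe can be stubbed.
health_status() {
    docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(health_status "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "giving up on $name after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

Note how the trace walks through `unhealthy` and `starting` before `healthy` for ceph-ansible: after the `systemctl restart manager.service`, the container's health probe has to pass again before the deploy continues.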
2026-02-23 20:24:17.350151 | orchestrator | 2026-02-23 20:24:17.350252 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-23 20:24:17.350267 | orchestrator | 2026-02-23 20:24:17.350278 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-23 20:24:17.350289 | orchestrator | Monday 23 February 2026 20:24:08 +0000 (0:00:00.164) 0:00:00.164 ******* 2026-02-23 20:24:17.350299 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:24:17.350309 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:24:17.350319 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:24:17.350329 | orchestrator | ok: [testbed-manager] 2026-02-23 20:24:17.350339 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:24:17.350348 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:24:17.350358 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:24:17.350367 | orchestrator | 2026-02-23 20:24:17.350378 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-23 20:24:17.350387 | orchestrator | 2026-02-23 20:24:17.350397 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-23 20:24:17.350407 | orchestrator | Monday 23 February 2026 20:24:16 +0000 (0:00:08.601) 0:00:08.765 ******* 2026-02-23 20:24:17.350417 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:24:17.350427 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:24:17.350437 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:24:17.350447 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:24:17.350457 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:24:17.350466 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:24:17.350539 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:24:17.350549 | orchestrator | 2026-02-23 20:24:17.350578 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-23 20:24:17.350593 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:24:17.350625 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:24:17.350636 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:24:17.350645 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:24:17.350655 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:24:17.350665 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:24:17.350674 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-23 20:24:17.350684 | orchestrator | 2026-02-23 20:24:17.350693 | orchestrator | 2026-02-23 20:24:17.350703 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:24:17.350713 | orchestrator | Monday 23 February 2026 20:24:17 +0000 (0:00:00.460) 0:00:09.226 ******* 2026-02-23 20:24:17.350722 | orchestrator | =============================================================================== 2026-02-23 20:24:17.350732 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.60s 2026-02-23 20:24:17.350741 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2026-02-23 20:24:17.566431 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-23 20:24:17.579006 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-23 
20:24:17.594544 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-23 20:24:17.603925 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-23 20:24:17.612691 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-23 20:24:17.632924 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-23 20:24:17.641961 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-23 20:24:17.651395 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-23 20:24:17.662625 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-23 20:24:17.672923 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-23 20:24:17.688909 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-23 20:24:17.703900 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-23 20:24:17.721449 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-23 20:24:17.733458 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-23 20:24:17.751140 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-23 20:24:17.763859 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-23 20:24:17.780157 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-23 20:24:17.792672 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-23 20:24:17.808979 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-23 20:24:17.825011 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-23 20:24:17.839056 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-23 20:24:17.857235 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-23 20:24:17.872417 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-23 20:24:17.892714 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-23 20:24:18.377056 | orchestrator | ok: Runtime: 0:24:07.398525 2026-02-23 20:24:18.475647 | 2026-02-23 20:24:18.475796 | TASK [Deploy services] 2026-02-23 20:24:19.008240 | orchestrator | skipping: Conditional result was False 2026-02-23 20:24:19.025461 | 2026-02-23 20:24:19.025646 | TASK [Deploy in a nutshell] 2026-02-23 20:24:19.716508 | orchestrator | + set -e 2026-02-23 20:24:19.716632 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-23 20:24:19.716643 | orchestrator | ++ export INTERACTIVE=false 2026-02-23 20:24:19.716652 | orchestrator | ++ INTERACTIVE=false 2026-02-23 20:24:19.716658 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-23 20:24:19.716663 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-23 20:24:19.716669 | 
orchestrator | + source /opt/manager-vars.sh 2026-02-23 20:24:19.716690 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-23 20:24:19.716701 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-23 20:24:19.716707 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-23 20:24:19.716713 | orchestrator | ++ CEPH_VERSION=reef 2026-02-23 20:24:19.716717 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-23 20:24:19.716725 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-23 20:24:19.716729 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-23 20:24:19.716737 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-23 20:24:19.716749 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-23 20:24:19.716756 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-23 20:24:19.716760 | orchestrator | ++ export ARA=false 2026-02-23 20:24:19.716764 | orchestrator | ++ ARA=false 2026-02-23 20:24:19.716768 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-23 20:24:19.716772 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-23 20:24:19.716776 | orchestrator | ++ export TEMPEST=false 2026-02-23 20:24:19.716780 | orchestrator | ++ TEMPEST=false 2026-02-23 20:24:19.716784 | orchestrator | ++ export IS_ZUUL=true 2026-02-23 20:24:19.716788 | orchestrator | ++ IS_ZUUL=true 2026-02-23 20:24:19.716791 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96 2026-02-23 20:24:19.716969 | orchestrator | 2026-02-23 20:24:19.716976 | orchestrator | # PULL IMAGES 2026-02-23 20:24:19.716980 | orchestrator | 2026-02-23 20:24:19.716984 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96 2026-02-23 20:24:19.716988 | orchestrator | ++ export EXTERNAL_API=false 2026-02-23 20:24:19.716992 | orchestrator | ++ EXTERNAL_API=false 2026-02-23 20:24:19.716996 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-23 20:24:19.717001 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-23 20:24:19.717004 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-23 20:24:19.717008 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-23 20:24:19.717012 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-23 20:24:19.717020 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-23 20:24:19.717024 | orchestrator | + echo 2026-02-23 20:24:19.717028 | orchestrator | + echo '# PULL IMAGES' 2026-02-23 20:24:19.717032 | orchestrator | + echo 2026-02-23 20:24:19.717882 | orchestrator | ++ semver latest 7.0.0 2026-02-23 20:24:19.763904 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-23 20:24:19.763985 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-23 20:24:19.763997 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-23 20:24:21.510768 | orchestrator | 2026-02-23 20:24:21 | INFO  | Trying to run play pull-images in environment custom 2026-02-23 20:24:31.524370 | orchestrator | 2026-02-23 20:24:31 | INFO  | Prepare task for execution of pull-images. 2026-02-23 20:24:31.588836 | orchestrator | 2026-02-23 20:24:31 | INFO  | Task a1d36fd0-0fcb-4f7e-a6f8-3dfab783667e (pull-images) was prepared for execution. 2026-02-23 20:24:31.588936 | orchestrator | 2026-02-23 20:24:31 | INFO  | Task a1d36fd0-0fcb-4f7e-a6f8-3dfab783667e is running in background. No more output. Check ARA for logs. 2026-02-23 20:24:33.605988 | orchestrator | 2026-02-23 20:24:33 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-23 20:24:43.648325 | orchestrator | 2026-02-23 20:24:43 | INFO  | Prepare task for execution of wipe-partitions. 2026-02-23 20:24:43.715339 | orchestrator | 2026-02-23 20:24:43 | INFO  | Task 0e4c9b72-c35c-464b-bb79-4a8ea4fd0546 (wipe-partitions) was prepared for execution. 2026-02-23 20:24:43.715448 | orchestrator | 2026-02-23 20:24:43 | INFO  | It takes a moment until task 0e4c9b72-c35c-464b-bb79-4a8ea4fd0546 (wipe-partitions) has been started and output is visible here. 
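The `++ semver latest 7.0.0` / `+ [[ -1 -ge 0 ]]` / `+ [[ latest == ... ]]` lines above trace a version gate before `osism apply --no-wait -r 2 -e custom pull-images` runs. A hedged reconstruction of that guard (the wrapper function and its name are assumptions; only the `semver` comparison and the `latest` string match appear in the trace):

```shell
# Run a step when the manager version compares >= the minimum, or is the
# literal tag "latest" (which semver cannot order numerically).
manager_version_at_least() {
    local version=$1 minimum=$2
    [[ "$(semver "$version" "$minimum")" -ge 0 ]] || [[ "$version" == latest ]]
}
```

With `MANAGER_VERSION=latest`, `semver` returns -1, so the first test fails and the string match lets the pull-images play proceed, exactly as in the trace.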
2026-02-23 20:24:55 | orchestrator |
orchestrator | PLAY [Wipe partitions] *********************************************************
orchestrator |
orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
orchestrator | Monday 23 February 2026 20:24:47 +0000 (0:00:00.154) 0:00:00.154 *******
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [Remove all rook related logical devices] *********************************
orchestrator | Monday 23 February 2026 20:24:48 +0000 (0:00:00.585) 0:00:00.740 *******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
orchestrator | Monday 23 February 2026 20:24:48 +0000 (0:00:00.358) 0:00:01.099 *******
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [Remove all ceph related logical devices] *********************************
orchestrator | Monday 23 February 2026 20:24:49 +0000 (0:00:00.563) 0:00:01.662 *******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [Check device availability] ***********************************************
orchestrator | Monday 23 February 2026 20:24:49 +0000 (0:00:00.246) 0:00:01.908 *******
orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
orchestrator |
orchestrator | TASK [Wipe partitions with wipefs] *********************************************
orchestrator | Monday 23 February 2026 20:24:50 +0000 (0:00:01.216) 0:00:03.125 *******
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
orchestrator |
orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
orchestrator | Monday 23 February 2026 20:24:52 +0000 (0:00:01.541) 0:00:04.666 *******
orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
orchestrator |
orchestrator | TASK [Reload udev rules] *******************************************************
orchestrator | Monday 23 February 2026 20:24:54 +0000 (0:00:02.099) 0:00:06.766 *******
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [Request device events from the kernel] ***********************************
orchestrator | Monday 23 February 2026 20:24:54 +0000 (0:00:00.573) 0:00:07.339 *******
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Monday 23 February 2026 20:24:55 +0000 (0:00:00.599) 0:00:07.939 *******
orchestrator | ===============================================================================
orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.10s
orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.54s
orchestrator | Check device availability ----------------------------------------------- 1.22s
orchestrator | Request device events from the kernel ----------------------------------- 0.60s
orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s
orchestrator | Reload udev rules ------------------------------------------------------- 0.57s
orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.56s
orchestrator | Remove all rook related logical devices --------------------------------- 0.36s
orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s
orchestrator | 2026-02-23 20:25:07 | INFO  | Prepare task for execution of facts.
orchestrator | 2026-02-23 20:25:07 | INFO  | Task 3a19d283-4b12-495f-863c-d6d7f82ffc06 (facts) was prepared for execution.
orchestrator | 2026-02-23 20:25:07 | INFO  | It takes a moment until task 3a19d283-4b12-495f-863c-d6d7f82ffc06 (facts) has been started and output is visible here.
orchestrator |
orchestrator | PLAY [Apply role facts] ********************************************************
orchestrator |
orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
orchestrator | Monday 23 February 2026 20:25:11 +0000 (0:00:00.207) 0:00:00.207 *******
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
orchestrator | Monday 23 February 2026 20:25:12 +0000 (0:00:00.890) 0:00:01.097 *******
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | PLAY [Gather facts for all hosts] **********************************************
orchestrator |
orchestrator | TASK [Gathers facts about hosts] ***********************************************
orchestrator | Monday 23 February 2026 20:25:13 +0000 (0:00:01.052) 0:00:02.149 *******
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
orchestrator |
orchestrator | TASK [Gather facts for all hosts] **********************************************
orchestrator | Monday 23 February 2026 20:25:18 +0000 (0:00:04.752) 0:00:06.902 *******
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Monday 23 February 2026 20:25:18 +0000 (0:00:00.512) 0:00:07.415 *******
orchestrator | ===============================================================================
orchestrator | Gathers facts about hosts ----------------------------------------------- 4.75s
orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s
orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.89s
orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
orchestrator | 2026-02-23 20:25:21 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
orchestrator | 2026-02-23 20:25:21 | INFO  | Task 874c2ab0-7a4b-4bcc-aab9-aad0b1365dde (ceph-configure-lvm-volumes) was prepared for execution.
orchestrator | 2026-02-23 20:25:21 | INFO  | It takes a moment until task 874c2ab0-7a4b-4bcc-aab9-aad0b1365dde (ceph-configure-lvm-volumes) has been started and output is visible here.
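The wipe-partitions play earlier boils down to three operations per OSD disk (`/dev/sdb`..`/dev/sdd` in this run): drop signatures with `wipefs`, zero the first 32M, then refresh udev. A minimal sketch of that sequence, assuming a hypothetical `wipe_device` helper (the real play runs these as Ansible tasks):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper mirroring the "Wipe partitions with wipefs" and
# "Overwrite first 32M with zeros" tasks from the play above.
wipe_device() {
  local dev="$1"
  # Remove filesystem/RAID/partition-table signatures; tolerated to fail
  # on targets without recognizable signatures.
  wipefs --all "$dev" >/dev/null 2>&1 || true
  # Zero the first 32M so stale LVM/Ceph metadata cannot be rediscovered.
  dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc,fsync status=none
}

# On the real nodes this would be followed by the two udev tasks:
#   udevadm control --reload-rules   # TASK "Reload udev rules"
#   udevadm trigger                  # TASK "Request device events from the kernel"
```

Running this against a real block device is destructive, which is why the play is gated to the storage nodes (testbed-node-3..5) only.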
2026-02-23 20:25:33 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2.16.14
orchestrator |
orchestrator | PLAY [Ceph configure LVM] ******************************************************
orchestrator |
orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
orchestrator | Monday 23 February 2026 20:25:25 +0000 (0:00:00.311) 0:00:00.311 *******
orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
orchestrator |
orchestrator | TASK [Get initial list of available block devices] *****************************
orchestrator | Monday 23 February 2026 20:25:26 +0000 (0:00:00.271) 0:00:00.582 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:26 +0000 (0:00:00.225) 0:00:00.808 *******
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:26 +0000 (0:00:00.451) 0:00:01.259 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:27 +0000 (0:00:00.191) 0:00:01.451 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:27 +0000 (0:00:00.196) 0:00:01.647 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:27 +0000 (0:00:00.197) 0:00:01.845 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:27 +0000 (0:00:00.191) 0:00:02.036 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:27 +0000 (0:00:00.212) 0:00:02.249 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:28 +0000 (0:00:00.218) 0:00:02.468 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:28 +0000 (0:00:00.203) 0:00:02.672 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:28 +0000 (0:00:00.213) 0:00:02.885 *******
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba)
orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba)
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:28 +0000 (0:00:00.399) 0:00:03.285 *******
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da)
orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da)
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:29 +0000 (0:00:00.623) 0:00:03.908 *******
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4)
orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4)
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:30 +0000 (0:00:00.626) 0:00:04.535 *******
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9)
orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9)
orchestrator |
orchestrator | TASK [Add known links to the list of available block devices] ******************
orchestrator | Monday 23 February 2026 20:25:31 +0000 (0:00:00.849) 0:00:05.384 *******
orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:31 +0000 (0:00:00.347) 0:00:05.732 *******
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:31 +0000 (0:00:00.379) 0:00:06.112 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:31 +0000 (0:00:00.197) 0:00:06.309 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:32 +0000 (0:00:00.205) 0:00:06.515 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:32 +0000 (0:00:00.199) 0:00:06.715 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:32 +0000 (0:00:00.198) 0:00:06.914 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:32 +0000 (0:00:00.196) 0:00:07.110 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:32 +0000 (0:00:00.201) 0:00:07.312 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:33 +0000 (0:00:00.202) 0:00:07.514 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:33 +0000 (0:00:00.207) 0:00:07.721 *******
orchestrator | ok: [testbed-node-3] => (item=sda1)
orchestrator | ok: [testbed-node-3] => (item=sda14)
orchestrator | ok: [testbed-node-3] => (item=sda15)
orchestrator | ok: [testbed-node-3] => (item=sda16)
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:34 +0000 (0:00:00.998) 0:00:08.719 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:34 +0000 (0:00:00.192) 0:00:08.912 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:34 +0000 (0:00:00.197) 0:00:09.109 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
orchestrator | Monday 23 February 2026 20:25:34 +0000 (0:00:00.197) 0:00:09.307 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
orchestrator | Monday 23 February 2026 20:25:35 +0000 (0:00:00.201) 0:00:09.508 *******
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
orchestrator |
orchestrator | TASK [Generate WAL VG names] ***************************************************
orchestrator | Monday 23 February 2026 20:25:35 +0000 (0:00:00.177) 0:00:09.685 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Generate DB VG names] ****************************************************
orchestrator | Monday 23 February 2026 20:25:35 +0000 (0:00:00.141) 0:00:09.827 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
orchestrator | Monday 23 February 2026 20:25:35 +0000 (0:00:00.131) 0:00:09.959 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Define lvm_volumes structures] *******************************************
orchestrator | Monday 23 February 2026 20:25:35 +0000 (0:00:00.126) 0:00:10.085 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
orchestrator | Monday 23 February 2026 20:25:35 +0000 (0:00:00.152) 0:00:10.238 *******
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16360c2d-86c0-538a-b982-f32cf88f5f8a'}})
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fef89255-3917-5f7c-b809-8ef443377219'}})
orchestrator |
orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
orchestrator | Monday 23 February 2026 20:25:36 +0000 (0:00:00.166) 0:00:10.405 *******
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16360c2d-86c0-538a-b982-f32cf88f5f8a'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fef89255-3917-5f7c-b809-8ef443377219'}})
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
orchestrator | Monday 23 February 2026 20:25:36 +0000 (0:00:00.129) 0:00:10.535 *******
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16360c2d-86c0-538a-b982-f32cf88f5f8a'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fef89255-3917-5f7c-b809-8ef443377219'}})
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
orchestrator | Monday 23 February 2026 20:25:36 +0000 (0:00:00.317) 0:00:10.853 *******
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16360c2d-86c0-538a-b982-f32cf88f5f8a'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fef89255-3917-5f7c-b809-8ef443377219'}})
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Compile lvm_volumes] *****************************************************
orchestrator | Monday 23 February 2026 20:25:36 +0000 (0:00:00.157) 0:00:11.010 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [Set OSD devices config data] *********************************************
orchestrator | Monday 23 February 2026 20:25:36 +0000 (0:00:00.140) 0:00:11.151 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [Set DB devices config data] **********************************************
orchestrator | Monday 23 February 2026 20:25:36 +0000 (0:00:00.144) 0:00:11.296 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Set WAL devices config data] *********************************************
orchestrator | Monday 23 February 2026 20:25:37 +0000 (0:00:00.126) 0:00:11.422 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Set DB+WAL devices config data] ******************************************
orchestrator | Monday 23 February 2026 20:25:37 +0000 (0:00:00.137) 0:00:11.559 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [Print ceph_osd_devices] **************************************************
orchestrator | Monday 23 February 2026 20:25:37 +0000
(0:00:00.152) 0:00:11.712 ******* 2026-02-23 20:25:40.732225 | orchestrator | ok: [testbed-node-3] => { 2026-02-23 20:25:40.732235 | orchestrator |  "ceph_osd_devices": { 2026-02-23 20:25:40.732246 | orchestrator |  "sdb": { 2026-02-23 20:25:40.732257 | orchestrator |  "osd_lvm_uuid": "16360c2d-86c0-538a-b982-f32cf88f5f8a" 2026-02-23 20:25:40.732268 | orchestrator |  }, 2026-02-23 20:25:40.732279 | orchestrator |  "sdc": { 2026-02-23 20:25:40.732289 | orchestrator |  "osd_lvm_uuid": "fef89255-3917-5f7c-b809-8ef443377219" 2026-02-23 20:25:40.732300 | orchestrator |  } 2026-02-23 20:25:40.732310 | orchestrator |  } 2026-02-23 20:25:40.732321 | orchestrator | } 2026-02-23 20:25:40.732332 | orchestrator | 2026-02-23 20:25:40.732343 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-23 20:25:40.732354 | orchestrator | Monday 23 February 2026 20:25:37 +0000 (0:00:00.158) 0:00:11.870 ******* 2026-02-23 20:25:40.732364 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:25:40.732374 | orchestrator | 2026-02-23 20:25:40.732385 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-23 20:25:40.732396 | orchestrator | Monday 23 February 2026 20:25:37 +0000 (0:00:00.138) 0:00:12.008 ******* 2026-02-23 20:25:40.732405 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:25:40.732416 | orchestrator | 2026-02-23 20:25:40.732427 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-23 20:25:40.732437 | orchestrator | Monday 23 February 2026 20:25:37 +0000 (0:00:00.139) 0:00:12.148 ******* 2026-02-23 20:25:40.732447 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:25:40.732457 | orchestrator | 2026-02-23 20:25:40.732468 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-23 20:25:40.732478 | orchestrator | Monday 23 February 2026 20:25:37 +0000 
(0:00:00.145) 0:00:12.293 ******* 2026-02-23 20:25:40.732610 | orchestrator | changed: [testbed-node-3] => { 2026-02-23 20:25:40.732625 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-23 20:25:40.732631 | orchestrator |  "ceph_osd_devices": { 2026-02-23 20:25:40.732638 | orchestrator |  "sdb": { 2026-02-23 20:25:40.732644 | orchestrator |  "osd_lvm_uuid": "16360c2d-86c0-538a-b982-f32cf88f5f8a" 2026-02-23 20:25:40.732650 | orchestrator |  }, 2026-02-23 20:25:40.732657 | orchestrator |  "sdc": { 2026-02-23 20:25:40.732663 | orchestrator |  "osd_lvm_uuid": "fef89255-3917-5f7c-b809-8ef443377219" 2026-02-23 20:25:40.732669 | orchestrator |  } 2026-02-23 20:25:40.732676 | orchestrator |  }, 2026-02-23 20:25:40.732682 | orchestrator |  "lvm_volumes": [ 2026-02-23 20:25:40.732688 | orchestrator |  { 2026-02-23 20:25:40.732695 | orchestrator |  "data": "osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a", 2026-02-23 20:25:40.732701 | orchestrator |  "data_vg": "ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a" 2026-02-23 20:25:40.732715 | orchestrator |  }, 2026-02-23 20:25:40.732722 | orchestrator |  { 2026-02-23 20:25:40.732728 | orchestrator |  "data": "osd-block-fef89255-3917-5f7c-b809-8ef443377219", 2026-02-23 20:25:40.732735 | orchestrator |  "data_vg": "ceph-fef89255-3917-5f7c-b809-8ef443377219" 2026-02-23 20:25:40.732741 | orchestrator |  } 2026-02-23 20:25:40.732747 | orchestrator |  ] 2026-02-23 20:25:40.732753 | orchestrator |  } 2026-02-23 20:25:40.732760 | orchestrator | } 2026-02-23 20:25:40.732766 | orchestrator | 2026-02-23 20:25:40.732772 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-23 20:25:40.732779 | orchestrator | Monday 23 February 2026 20:25:38 +0000 (0:00:00.426) 0:00:12.720 ******* 2026-02-23 20:25:40.732785 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-23 20:25:40.732791 | orchestrator | 2026-02-23 20:25:40.732797 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-23 20:25:40.732804 | orchestrator | 2026-02-23 20:25:40.732810 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-23 20:25:40.732816 | orchestrator | Monday 23 February 2026 20:25:40 +0000 (0:00:01.846) 0:00:14.566 ******* 2026-02-23 20:25:40.732822 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-23 20:25:40.732828 | orchestrator | 2026-02-23 20:25:40.732835 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-23 20:25:40.732841 | orchestrator | Monday 23 February 2026 20:25:40 +0000 (0:00:00.285) 0:00:14.852 ******* 2026-02-23 20:25:40.732847 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:25:40.732853 | orchestrator | 2026-02-23 20:25:40.732869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.754793 | orchestrator | Monday 23 February 2026 20:25:40 +0000 (0:00:00.234) 0:00:15.087 ******* 2026-02-23 20:25:48.754886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-23 20:25:48.754896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-23 20:25:48.754904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-23 20:25:48.754912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-23 20:25:48.754920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-23 20:25:48.754928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-23 20:25:48.754936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-23 20:25:48.754947 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-23 20:25:48.754955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-23 20:25:48.754964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-23 20:25:48.754972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-23 20:25:48.754979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-23 20:25:48.755002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-23 20:25:48.755010 | orchestrator | 2026-02-23 20:25:48.755019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755027 | orchestrator | Monday 23 February 2026 20:25:41 +0000 (0:00:00.387) 0:00:15.475 ******* 2026-02-23 20:25:48.755035 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755043 | orchestrator | 2026-02-23 20:25:48.755051 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755058 | orchestrator | Monday 23 February 2026 20:25:41 +0000 (0:00:00.202) 0:00:15.677 ******* 2026-02-23 20:25:48.755084 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755092 | orchestrator | 2026-02-23 20:25:48.755100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755107 | orchestrator | Monday 23 February 2026 20:25:41 +0000 (0:00:00.203) 0:00:15.881 ******* 2026-02-23 20:25:48.755115 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755122 | orchestrator | 2026-02-23 20:25:48.755130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755138 | 
orchestrator | Monday 23 February 2026 20:25:41 +0000 (0:00:00.199) 0:00:16.081 ******* 2026-02-23 20:25:48.755145 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755153 | orchestrator | 2026-02-23 20:25:48.755160 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755168 | orchestrator | Monday 23 February 2026 20:25:41 +0000 (0:00:00.190) 0:00:16.271 ******* 2026-02-23 20:25:48.755176 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755183 | orchestrator | 2026-02-23 20:25:48.755191 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755198 | orchestrator | Monday 23 February 2026 20:25:42 +0000 (0:00:00.578) 0:00:16.849 ******* 2026-02-23 20:25:48.755206 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755213 | orchestrator | 2026-02-23 20:25:48.755221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755228 | orchestrator | Monday 23 February 2026 20:25:42 +0000 (0:00:00.209) 0:00:17.059 ******* 2026-02-23 20:25:48.755236 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755244 | orchestrator | 2026-02-23 20:25:48.755251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755259 | orchestrator | Monday 23 February 2026 20:25:42 +0000 (0:00:00.205) 0:00:17.264 ******* 2026-02-23 20:25:48.755266 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755274 | orchestrator | 2026-02-23 20:25:48.755281 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755289 | orchestrator | Monday 23 February 2026 20:25:43 +0000 (0:00:00.216) 0:00:17.481 ******* 2026-02-23 20:25:48.755296 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0) 2026-02-23 20:25:48.755304 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0) 2026-02-23 20:25:48.755312 | orchestrator | 2026-02-23 20:25:48.755320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755327 | orchestrator | Monday 23 February 2026 20:25:43 +0000 (0:00:00.419) 0:00:17.901 ******* 2026-02-23 20:25:48.755335 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654) 2026-02-23 20:25:48.755343 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654) 2026-02-23 20:25:48.755352 | orchestrator | 2026-02-23 20:25:48.755360 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755369 | orchestrator | Monday 23 February 2026 20:25:43 +0000 (0:00:00.414) 0:00:18.316 ******* 2026-02-23 20:25:48.755377 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21) 2026-02-23 20:25:48.755386 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21) 2026-02-23 20:25:48.755394 | orchestrator | 2026-02-23 20:25:48.755403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:25:48.755428 | orchestrator | Monday 23 February 2026 20:25:44 +0000 (0:00:00.438) 0:00:18.754 ******* 2026-02-23 20:25:48.755437 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a) 2026-02-23 20:25:48.755446 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a) 2026-02-23 20:25:48.755455 | orchestrator | 2026-02-23 20:25:48.755469 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-23 20:25:48.755477 | orchestrator | Monday 23 February 2026 20:25:44 +0000 (0:00:00.419) 0:00:19.173 ******* 2026-02-23 20:25:48.755523 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-23 20:25:48.755536 | orchestrator | 2026-02-23 20:25:48.755549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.755560 | orchestrator | Monday 23 February 2026 20:25:45 +0000 (0:00:00.335) 0:00:19.509 ******* 2026-02-23 20:25:48.755575 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-23 20:25:48.755588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-23 20:25:48.755608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-23 20:25:48.755622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-23 20:25:48.755634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-23 20:25:48.755645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-23 20:25:48.755653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-23 20:25:48.755661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-23 20:25:48.755669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-23 20:25:48.755678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-23 20:25:48.755686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-02-23 20:25:48.755694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-23 20:25:48.755701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-23 20:25:48.755708 | orchestrator | 2026-02-23 20:25:48.755716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.755723 | orchestrator | Monday 23 February 2026 20:25:45 +0000 (0:00:00.360) 0:00:19.870 ******* 2026-02-23 20:25:48.755730 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755738 | orchestrator | 2026-02-23 20:25:48.755745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.755752 | orchestrator | Monday 23 February 2026 20:25:46 +0000 (0:00:00.662) 0:00:20.532 ******* 2026-02-23 20:25:48.755760 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755767 | orchestrator | 2026-02-23 20:25:48.755775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.755782 | orchestrator | Monday 23 February 2026 20:25:46 +0000 (0:00:00.227) 0:00:20.759 ******* 2026-02-23 20:25:48.755789 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755796 | orchestrator | 2026-02-23 20:25:48.755804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.755811 | orchestrator | Monday 23 February 2026 20:25:46 +0000 (0:00:00.209) 0:00:20.968 ******* 2026-02-23 20:25:48.755818 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755825 | orchestrator | 2026-02-23 20:25:48.755833 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.755840 | orchestrator | Monday 23 February 2026 20:25:46 +0000 (0:00:00.203) 0:00:21.171 ******* 2026-02-23 20:25:48.755847 
| orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755855 | orchestrator | 2026-02-23 20:25:48.755862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.755869 | orchestrator | Monday 23 February 2026 20:25:47 +0000 (0:00:00.193) 0:00:21.365 ******* 2026-02-23 20:25:48.755877 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755891 | orchestrator | 2026-02-23 20:25:48.755903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.755914 | orchestrator | Monday 23 February 2026 20:25:47 +0000 (0:00:00.199) 0:00:21.565 ******* 2026-02-23 20:25:48.755926 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755938 | orchestrator | 2026-02-23 20:25:48.755949 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.755959 | orchestrator | Monday 23 February 2026 20:25:47 +0000 (0:00:00.204) 0:00:21.770 ******* 2026-02-23 20:25:48.755971 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:48.755984 | orchestrator | 2026-02-23 20:25:48.755996 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.756004 | orchestrator | Monday 23 February 2026 20:25:47 +0000 (0:00:00.217) 0:00:21.987 ******* 2026-02-23 20:25:48.756011 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-23 20:25:48.756020 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-23 20:25:48.756027 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-23 20:25:48.756035 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-23 20:25:48.756042 | orchestrator | 2026-02-23 20:25:48.756049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:48.756056 | orchestrator | Monday 23 February 2026 20:25:48 +0000 (0:00:00.995) 
0:00:22.983 ******* 2026-02-23 20:25:48.756064 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.522682 | orchestrator | 2026-02-23 20:25:54.522765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:54.522774 | orchestrator | Monday 23 February 2026 20:25:48 +0000 (0:00:00.198) 0:00:23.181 ******* 2026-02-23 20:25:54.522779 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.522785 | orchestrator | 2026-02-23 20:25:54.522790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:54.522794 | orchestrator | Monday 23 February 2026 20:25:49 +0000 (0:00:00.215) 0:00:23.397 ******* 2026-02-23 20:25:54.522799 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.522803 | orchestrator | 2026-02-23 20:25:54.522808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:25:54.522813 | orchestrator | Monday 23 February 2026 20:25:49 +0000 (0:00:00.200) 0:00:23.597 ******* 2026-02-23 20:25:54.522817 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.522821 | orchestrator | 2026-02-23 20:25:54.522826 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-23 20:25:54.522830 | orchestrator | Monday 23 February 2026 20:25:49 +0000 (0:00:00.659) 0:00:24.257 ******* 2026-02-23 20:25:54.522835 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-23 20:25:54.522839 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-23 20:25:54.522844 | orchestrator | 2026-02-23 20:25:54.522848 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-23 20:25:54.522866 | orchestrator | Monday 23 February 2026 20:25:50 +0000 (0:00:00.168) 0:00:24.426 ******* 2026-02-23 20:25:54.522871 | orchestrator | skipping: 
[testbed-node-4] 2026-02-23 20:25:54.522875 | orchestrator | 2026-02-23 20:25:54.522880 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-23 20:25:54.522884 | orchestrator | Monday 23 February 2026 20:25:50 +0000 (0:00:00.205) 0:00:24.631 ******* 2026-02-23 20:25:54.522888 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.522893 | orchestrator | 2026-02-23 20:25:54.522897 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-23 20:25:54.522904 | orchestrator | Monday 23 February 2026 20:25:50 +0000 (0:00:00.148) 0:00:24.780 ******* 2026-02-23 20:25:54.522909 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.522913 | orchestrator | 2026-02-23 20:25:54.522917 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-23 20:25:54.522922 | orchestrator | Monday 23 February 2026 20:25:50 +0000 (0:00:00.153) 0:00:24.934 ******* 2026-02-23 20:25:54.522941 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:25:54.522946 | orchestrator | 2026-02-23 20:25:54.522951 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-23 20:25:54.522955 | orchestrator | Monday 23 February 2026 20:25:50 +0000 (0:00:00.135) 0:00:25.070 ******* 2026-02-23 20:25:54.522960 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b14837c-f03f-563c-b8ac-393f544981fc'}}) 2026-02-23 20:25:54.522965 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21252442-555c-5549-b537-6075952af6e0'}}) 2026-02-23 20:25:54.522970 | orchestrator | 2026-02-23 20:25:54.522974 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-23 20:25:54.522979 | orchestrator | Monday 23 February 2026 20:25:50 +0000 (0:00:00.162) 0:00:25.232 ******* 2026-02-23 20:25:54.522984 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b14837c-f03f-563c-b8ac-393f544981fc'}})  2026-02-23 20:25:54.522990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21252442-555c-5549-b537-6075952af6e0'}})  2026-02-23 20:25:54.522994 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.522999 | orchestrator | 2026-02-23 20:25:54.523003 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-23 20:25:54.523008 | orchestrator | Monday 23 February 2026 20:25:51 +0000 (0:00:00.146) 0:00:25.379 ******* 2026-02-23 20:25:54.523013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b14837c-f03f-563c-b8ac-393f544981fc'}})  2026-02-23 20:25:54.523017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21252442-555c-5549-b537-6075952af6e0'}})  2026-02-23 20:25:54.523022 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.523027 | orchestrator | 2026-02-23 20:25:54.523031 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-23 20:25:54.523036 | orchestrator | Monday 23 February 2026 20:25:51 +0000 (0:00:00.145) 0:00:25.524 ******* 2026-02-23 20:25:54.523041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b14837c-f03f-563c-b8ac-393f544981fc'}})  2026-02-23 20:25:54.523045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21252442-555c-5549-b537-6075952af6e0'}})  2026-02-23 20:25:54.523050 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.523054 | orchestrator | 2026-02-23 20:25:54.523059 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-23 20:25:54.523063 | orchestrator | Monday 23 February 2026 20:25:51 +0000 
(0:00:00.108) 0:00:25.633 ******* 2026-02-23 20:25:54.523068 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:25:54.523072 | orchestrator | 2026-02-23 20:25:54.523077 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-23 20:25:54.523082 | orchestrator | Monday 23 February 2026 20:25:51 +0000 (0:00:00.090) 0:00:25.723 ******* 2026-02-23 20:25:54.523086 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:25:54.523091 | orchestrator | 2026-02-23 20:25:54.523095 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-23 20:25:54.523100 | orchestrator | Monday 23 February 2026 20:25:51 +0000 (0:00:00.131) 0:00:25.855 ******* 2026-02-23 20:25:54.523115 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.523120 | orchestrator | 2026-02-23 20:25:54.523124 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-23 20:25:54.523129 | orchestrator | Monday 23 February 2026 20:25:51 +0000 (0:00:00.253) 0:00:26.109 ******* 2026-02-23 20:25:54.523133 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.523138 | orchestrator | 2026-02-23 20:25:54.523143 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-23 20:25:54.523147 | orchestrator | Monday 23 February 2026 20:25:51 +0000 (0:00:00.123) 0:00:26.232 ******* 2026-02-23 20:25:54.523152 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.523168 | orchestrator | 2026-02-23 20:25:54.523173 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-23 20:25:54.523177 | orchestrator | Monday 23 February 2026 20:25:51 +0000 (0:00:00.106) 0:00:26.338 ******* 2026-02-23 20:25:54.523182 | orchestrator | ok: [testbed-node-4] => { 2026-02-23 20:25:54.523186 | orchestrator |  "ceph_osd_devices": { 2026-02-23 20:25:54.523191 | orchestrator |  "sdb": { 
2026-02-23 20:25:54.523196 | orchestrator |  "osd_lvm_uuid": "2b14837c-f03f-563c-b8ac-393f544981fc" 2026-02-23 20:25:54.523201 | orchestrator |  }, 2026-02-23 20:25:54.523206 | orchestrator |  "sdc": { 2026-02-23 20:25:54.523210 | orchestrator |  "osd_lvm_uuid": "21252442-555c-5549-b537-6075952af6e0" 2026-02-23 20:25:54.523215 | orchestrator |  } 2026-02-23 20:25:54.523220 | orchestrator |  } 2026-02-23 20:25:54.523225 | orchestrator | } 2026-02-23 20:25:54.523230 | orchestrator | 2026-02-23 20:25:54.523235 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-23 20:25:54.523241 | orchestrator | Monday 23 February 2026 20:25:52 +0000 (0:00:00.101) 0:00:26.440 ******* 2026-02-23 20:25:54.523246 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.523251 | orchestrator | 2026-02-23 20:25:54.523256 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-23 20:25:54.523261 | orchestrator | Monday 23 February 2026 20:25:52 +0000 (0:00:00.090) 0:00:26.530 ******* 2026-02-23 20:25:54.523266 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.523271 | orchestrator | 2026-02-23 20:25:54.523276 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-23 20:25:54.523282 | orchestrator | Monday 23 February 2026 20:25:52 +0000 (0:00:00.107) 0:00:26.637 ******* 2026-02-23 20:25:54.523287 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:25:54.523292 | orchestrator | 2026-02-23 20:25:54.523297 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-23 20:25:54.523305 | orchestrator | Monday 23 February 2026 20:25:52 +0000 (0:00:00.103) 0:00:26.742 ******* 2026-02-23 20:25:54.523310 | orchestrator | changed: [testbed-node-4] => { 2026-02-23 20:25:54.523316 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-23 20:25:54.523321 | orchestrator | 
 "ceph_osd_devices": { 2026-02-23 20:25:54.523326 | orchestrator |  "sdb": { 2026-02-23 20:25:54.523331 | orchestrator |  "osd_lvm_uuid": "2b14837c-f03f-563c-b8ac-393f544981fc" 2026-02-23 20:25:54.523336 | orchestrator |  }, 2026-02-23 20:25:54.523341 | orchestrator |  "sdc": { 2026-02-23 20:25:54.523346 | orchestrator |  "osd_lvm_uuid": "21252442-555c-5549-b537-6075952af6e0" 2026-02-23 20:25:54.523351 | orchestrator |  } 2026-02-23 20:25:54.523356 | orchestrator |  }, 2026-02-23 20:25:54.523362 | orchestrator |  "lvm_volumes": [ 2026-02-23 20:25:54.523367 | orchestrator |  { 2026-02-23 20:25:54.523372 | orchestrator |  "data": "osd-block-2b14837c-f03f-563c-b8ac-393f544981fc", 2026-02-23 20:25:54.523377 | orchestrator |  "data_vg": "ceph-2b14837c-f03f-563c-b8ac-393f544981fc" 2026-02-23 20:25:54.523382 | orchestrator |  }, 2026-02-23 20:25:54.523387 | orchestrator |  { 2026-02-23 20:25:54.523392 | orchestrator |  "data": "osd-block-21252442-555c-5549-b537-6075952af6e0", 2026-02-23 20:25:54.523397 | orchestrator |  "data_vg": "ceph-21252442-555c-5549-b537-6075952af6e0" 2026-02-23 20:25:54.523402 | orchestrator |  } 2026-02-23 20:25:54.523407 | orchestrator |  ] 2026-02-23 20:25:54.523412 | orchestrator |  } 2026-02-23 20:25:54.523417 | orchestrator | } 2026-02-23 20:25:54.523422 | orchestrator | 2026-02-23 20:25:54.523427 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-23 20:25:54.523432 | orchestrator | Monday 23 February 2026 20:25:52 +0000 (0:00:00.188) 0:00:26.931 ******* 2026-02-23 20:25:54.523437 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-23 20:25:54.523442 | orchestrator | 2026-02-23 20:25:54.523451 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-23 20:25:54.523456 | orchestrator | 2026-02-23 20:25:54.523461 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2026-02-23 20:25:54.523466 | orchestrator | Monday 23 February 2026 20:25:53 +0000 (0:00:00.885) 0:00:27.816 *******
2026-02-23 20:25:54.523471 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-23 20:25:54.523476 | orchestrator |
2026-02-23 20:25:54.523481 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-23 20:25:54.523500 | orchestrator | Monday 23 February 2026 20:25:53 +0000 (0:00:00.507) 0:00:28.323 *******
2026-02-23 20:25:54.523505 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:25:54.523510 | orchestrator |
2026-02-23 20:25:54.523515 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:25:54.523520 | orchestrator | Monday 23 February 2026 20:25:54 +0000 (0:00:00.204) 0:00:28.527 *******
2026-02-23 20:25:54.523526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-23 20:25:54.523531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-23 20:25:54.523536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-23 20:25:54.523541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-23 20:25:54.523546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-23 20:25:54.523554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-23 20:26:01.537808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-23 20:26:01.537917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-23 20:26:01.537932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-23 20:26:01.537942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-23 20:26:01.537952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-23 20:26:01.537962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-23 20:26:01.537972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-23 20:26:01.537982 | orchestrator |
2026-02-23 20:26:01.537993 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538004 | orchestrator | Monday 23 February 2026 20:25:54 +0000 (0:00:00.438) 0:00:28.966 *******
2026-02-23 20:26:01.538074 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.538087 | orchestrator |
2026-02-23 20:26:01.538107 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538118 | orchestrator | Monday 23 February 2026 20:25:54 +0000 (0:00:00.200) 0:00:29.167 *******
2026-02-23 20:26:01.538127 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.538137 | orchestrator |
2026-02-23 20:26:01.538147 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538157 | orchestrator | Monday 23 February 2026 20:25:54 +0000 (0:00:00.172) 0:00:29.340 *******
2026-02-23 20:26:01.538166 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.538176 | orchestrator |
2026-02-23 20:26:01.538186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538196 | orchestrator | Monday 23 February 2026 20:25:55 +0000 (0:00:00.146) 0:00:29.487 *******
2026-02-23 20:26:01.538206 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.538216 | orchestrator |
2026-02-23 20:26:01.538226 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538236 | orchestrator | Monday 23 February 2026 20:25:55 +0000 (0:00:00.157) 0:00:29.644 *******
2026-02-23 20:26:01.538267 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.538277 | orchestrator |
2026-02-23 20:26:01.538287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538297 | orchestrator | Monday 23 February 2026 20:25:55 +0000 (0:00:00.179) 0:00:29.823 *******
2026-02-23 20:26:01.538307 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.538317 | orchestrator |
2026-02-23 20:26:01.538327 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538337 | orchestrator | Monday 23 February 2026 20:25:55 +0000 (0:00:00.171) 0:00:29.995 *******
2026-02-23 20:26:01.538348 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.538359 | orchestrator |
2026-02-23 20:26:01.538371 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538383 | orchestrator | Monday 23 February 2026 20:25:55 +0000 (0:00:00.163) 0:00:30.158 *******
2026-02-23 20:26:01.538393 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.538404 | orchestrator |
2026-02-23 20:26:01.538415 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538426 | orchestrator | Monday 23 February 2026 20:25:56 +0000 (0:00:00.233) 0:00:30.391 *******
2026-02-23 20:26:01.538437 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a)
2026-02-23 20:26:01.538450 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a)
2026-02-23 20:26:01.538461 | orchestrator |
2026-02-23 20:26:01.538472 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538484 | orchestrator | Monday 23 February 2026 20:25:56 +0000 (0:00:00.591) 0:00:30.983 *******
2026-02-23 20:26:01.538571 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163)
2026-02-23 20:26:01.538584 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163)
2026-02-23 20:26:01.538595 | orchestrator |
2026-02-23 20:26:01.538607 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538640 | orchestrator | Monday 23 February 2026 20:25:57 +0000 (0:00:00.382) 0:00:31.365 *******
2026-02-23 20:26:01.538652 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33)
2026-02-23 20:26:01.538663 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33)
2026-02-23 20:26:01.538674 | orchestrator |
2026-02-23 20:26:01.538686 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538697 | orchestrator | Monday 23 February 2026 20:25:57 +0000 (0:00:00.385) 0:00:31.751 *******
2026-02-23 20:26:01.538708 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0)
2026-02-23 20:26:01.538720 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0)
2026-02-23 20:26:01.538731 | orchestrator |
2026-02-23 20:26:01.538741 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:26:01.538750 | orchestrator | Monday 23 February 2026 20:25:57 +0000 (0:00:00.330) 0:00:32.081 *******
2026-02-23 20:26:01.538760 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-23 20:26:01.538770 | orchestrator |
2026-02-23 20:26:01.538780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.538808 | orchestrator | Monday 23 February 2026 20:25:58 +0000 (0:00:00.330) 0:00:32.411 *******
2026-02-23 20:26:01.538818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-23 20:26:01.538828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-23 20:26:01.538838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-23 20:26:01.538848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-23 20:26:01.538866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-23 20:26:01.538876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-23 20:26:01.538885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-23 20:26:01.538895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-23 20:26:01.538905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-23 20:26:01.538914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-23 20:26:01.538924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-23 20:26:01.538933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-23 20:26:01.538943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-23 20:26:01.538953 | orchestrator |
2026-02-23 20:26:01.538962 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.538972 | orchestrator | Monday 23 February 2026 20:25:58 +0000 (0:00:00.363) 0:00:32.775 *******
2026-02-23 20:26:01.538982 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.538992 | orchestrator |
2026-02-23 20:26:01.539001 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539011 | orchestrator | Monday 23 February 2026 20:25:58 +0000 (0:00:00.183) 0:00:32.958 *******
2026-02-23 20:26:01.539020 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539030 | orchestrator |
2026-02-23 20:26:01.539040 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539049 | orchestrator | Monday 23 February 2026 20:25:58 +0000 (0:00:00.185) 0:00:33.144 *******
2026-02-23 20:26:01.539059 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539068 | orchestrator |
2026-02-23 20:26:01.539078 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539088 | orchestrator | Monday 23 February 2026 20:25:58 +0000 (0:00:00.184) 0:00:33.328 *******
2026-02-23 20:26:01.539098 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539107 | orchestrator |
2026-02-23 20:26:01.539117 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539127 | orchestrator | Monday 23 February 2026 20:25:59 +0000 (0:00:00.195) 0:00:33.524 *******
2026-02-23 20:26:01.539136 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539146 | orchestrator |
2026-02-23 20:26:01.539155 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539165 | orchestrator | Monday 23 February 2026 20:25:59 +0000 (0:00:00.176) 0:00:33.700 *******
2026-02-23 20:26:01.539174 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539184 | orchestrator |
2026-02-23 20:26:01.539194 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539203 | orchestrator | Monday 23 February 2026 20:25:59 +0000 (0:00:00.446) 0:00:34.147 *******
2026-02-23 20:26:01.539213 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539222 | orchestrator |
2026-02-23 20:26:01.539232 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539241 | orchestrator | Monday 23 February 2026 20:25:59 +0000 (0:00:00.182) 0:00:34.329 *******
2026-02-23 20:26:01.539251 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539260 | orchestrator |
2026-02-23 20:26:01.539270 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539280 | orchestrator | Monday 23 February 2026 20:26:00 +0000 (0:00:00.171) 0:00:34.501 *******
2026-02-23 20:26:01.539289 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-23 20:26:01.539305 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-23 20:26:01.539316 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-23 20:26:01.539325 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-23 20:26:01.539335 | orchestrator |
2026-02-23 20:26:01.539345 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539355 | orchestrator | Monday 23 February 2026 20:26:00 +0000 (0:00:00.603) 0:00:35.104 *******
2026-02-23 20:26:01.539364 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539374 | orchestrator |
2026-02-23 20:26:01.539383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539393 | orchestrator | Monday 23 February 2026 20:26:00 +0000 (0:00:00.188) 0:00:35.293 *******
2026-02-23 20:26:01.539403 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539412 | orchestrator |
2026-02-23 20:26:01.539422 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539432 | orchestrator | Monday 23 February 2026 20:26:01 +0000 (0:00:00.194) 0:00:35.488 *******
2026-02-23 20:26:01.539441 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539451 | orchestrator |
2026-02-23 20:26:01.539460 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:26:01.539470 | orchestrator | Monday 23 February 2026 20:26:01 +0000 (0:00:00.195) 0:00:35.684 *******
2026-02-23 20:26:01.539480 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:01.539533 | orchestrator |
2026-02-23 20:26:01.539549 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-23 20:26:05.885932 | orchestrator | Monday 23 February 2026 20:26:01 +0000 (0:00:00.208) 0:00:35.892 *******
2026-02-23 20:26:05.886095 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-02-23 20:26:05.886113 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-02-23 20:26:05.886124 | orchestrator |
2026-02-23 20:26:05.886136 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-23 20:26:05.886146 | orchestrator | Monday 23 February 2026 20:26:01 +0000 (0:00:00.173) 0:00:36.066 *******
2026-02-23 20:26:05.886156 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.886182 | orchestrator |
2026-02-23 20:26:05.886192 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-23 20:26:05.886212 | orchestrator | Monday 23 February 2026 20:26:01 +0000 (0:00:00.139) 0:00:36.206 *******
2026-02-23 20:26:05.886241 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.886251 | orchestrator |
2026-02-23 20:26:05.886261 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-23 20:26:05.886271 | orchestrator | Monday 23 February 2026 20:26:01 +0000 (0:00:00.135) 0:00:36.341 *******
2026-02-23 20:26:05.886280 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.886290 | orchestrator |
2026-02-23 20:26:05.886301 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-23 20:26:05.886311 | orchestrator | Monday 23 February 2026 20:26:02 +0000 (0:00:00.342) 0:00:36.684 *******
2026-02-23 20:26:05.886321 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:26:05.886331 | orchestrator |
2026-02-23 20:26:05.886341 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-23 20:26:05.886350 | orchestrator | Monday 23 February 2026 20:26:02 +0000 (0:00:00.152) 0:00:36.837 *******
2026-02-23 20:26:05.886360 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '086e8658-baeb-56a9-865d-4af6c70c9ca3'}})
2026-02-23 20:26:05.886375 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '721c0c76-436b-5140-8464-e8c748d186e3'}})
2026-02-23 20:26:05.886384 | orchestrator |
2026-02-23 20:26:05.886394 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-23 20:26:05.886404 | orchestrator | Monday 23 February 2026 20:26:02 +0000 (0:00:00.176) 0:00:37.013 *******
2026-02-23 20:26:05.886414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '086e8658-baeb-56a9-865d-4af6c70c9ca3'}})
2026-02-23 20:26:05.886449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '721c0c76-436b-5140-8464-e8c748d186e3'}})
2026-02-23 20:26:05.886460 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.886471 | orchestrator |
2026-02-23 20:26:05.886483 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-23 20:26:05.886523 | orchestrator | Monday 23 February 2026 20:26:02 +0000 (0:00:00.146) 0:00:37.159 *******
2026-02-23 20:26:05.886534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '086e8658-baeb-56a9-865d-4af6c70c9ca3'}})
2026-02-23 20:26:05.886546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '721c0c76-436b-5140-8464-e8c748d186e3'}})
2026-02-23 20:26:05.886555 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.886565 | orchestrator |
2026-02-23 20:26:05.886574 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-23 20:26:05.886584 | orchestrator | Monday 23 February 2026 20:26:02 +0000 (0:00:00.177) 0:00:37.337 *******
2026-02-23 20:26:05.886594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '086e8658-baeb-56a9-865d-4af6c70c9ca3'}})
2026-02-23 20:26:05.886604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '721c0c76-436b-5140-8464-e8c748d186e3'}})
2026-02-23 20:26:05.886613 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.886623 | orchestrator |
2026-02-23 20:26:05.886632 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-23 20:26:05.886642 | orchestrator | Monday 23 February 2026 20:26:03 +0000 (0:00:00.216) 0:00:37.553 *******
2026-02-23 20:26:05.886651 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:26:05.886660 | orchestrator |
2026-02-23 20:26:05.886670 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-23 20:26:05.886679 | orchestrator | Monday 23 February 2026 20:26:03 +0000 (0:00:00.156) 0:00:37.710 *******
2026-02-23 20:26:05.886689 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:26:05.886698 | orchestrator |
2026-02-23 20:26:05.886707 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-23 20:26:05.886717 | orchestrator | Monday 23 February 2026 20:26:03 +0000 (0:00:00.163) 0:00:37.874 *******
2026-02-23 20:26:05.886726 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.886735 | orchestrator |
2026-02-23 20:26:05.886745 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-23 20:26:05.886755 | orchestrator | Monday 23 February 2026 20:26:03 +0000 (0:00:00.125) 0:00:37.999 *******
2026-02-23 20:26:05.886764 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.886774 | orchestrator |
2026-02-23 20:26:05.886783 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-23 20:26:05.886792 | orchestrator | Monday 23 February 2026 20:26:03 +0000 (0:00:00.149) 0:00:38.149 *******
2026-02-23 20:26:05.886801 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.886811 | orchestrator |
2026-02-23 20:26:05.886820 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-23 20:26:05.886835 | orchestrator | Monday 23 February 2026 20:26:03 +0000 (0:00:00.122) 0:00:38.272 *******
2026-02-23 20:26:05.886851 | orchestrator | ok: [testbed-node-5] => {
2026-02-23 20:26:05.886867 | orchestrator |     "ceph_osd_devices": {
2026-02-23 20:26:05.886882 | orchestrator |         "sdb": {
2026-02-23 20:26:05.886920 | orchestrator |             "osd_lvm_uuid": "086e8658-baeb-56a9-865d-4af6c70c9ca3"
2026-02-23 20:26:05.886936 | orchestrator |         },
2026-02-23 20:26:05.886951 | orchestrator |         "sdc": {
2026-02-23 20:26:05.886966 | orchestrator |             "osd_lvm_uuid": "721c0c76-436b-5140-8464-e8c748d186e3"
2026-02-23 20:26:05.886981 | orchestrator |         }
2026-02-23 20:26:05.886995 | orchestrator |     }
2026-02-23 20:26:05.887009 | orchestrator | }
2026-02-23 20:26:05.887024 | orchestrator |
2026-02-23 20:26:05.887052 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-23 20:26:05.887068 | orchestrator | Monday 23 February 2026 20:26:04 +0000 (0:00:00.145) 0:00:38.417 *******
2026-02-23 20:26:05.887083 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.887099 | orchestrator |
2026-02-23 20:26:05.887114 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-23 20:26:05.887129 | orchestrator | Monday 23 February 2026 20:26:04 +0000 (0:00:00.370) 0:00:38.788 *******
2026-02-23 20:26:05.887145 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.887160 | orchestrator |
2026-02-23 20:26:05.887176 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-23 20:26:05.887192 | orchestrator | Monday 23 February 2026 20:26:04 +0000 (0:00:00.161) 0:00:38.949 *******
2026-02-23 20:26:05.887208 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:26:05.887223 | orchestrator |
2026-02-23 20:26:05.887237 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-23 20:26:05.887247 | orchestrator | Monday 23 February 2026 20:26:04 +0000 (0:00:00.173) 0:00:39.122 *******
2026-02-23 20:26:05.887257 | orchestrator | changed: [testbed-node-5] => {
2026-02-23 20:26:05.887266 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-23 20:26:05.887277 | orchestrator |         "ceph_osd_devices": {
2026-02-23 20:26:05.887286 | orchestrator |             "sdb": {
2026-02-23 20:26:05.887296 | orchestrator |                 "osd_lvm_uuid": "086e8658-baeb-56a9-865d-4af6c70c9ca3"
2026-02-23 20:26:05.887306 | orchestrator |             },
2026-02-23 20:26:05.887315 | orchestrator |             "sdc": {
2026-02-23 20:26:05.887325 | orchestrator |                 "osd_lvm_uuid": "721c0c76-436b-5140-8464-e8c748d186e3"
2026-02-23 20:26:05.887334 | orchestrator |             }
2026-02-23 20:26:05.887344 | orchestrator |         },
2026-02-23 20:26:05.887354 | orchestrator |         "lvm_volumes": [
2026-02-23 20:26:05.887364 | orchestrator |             {
2026-02-23 20:26:05.887374 | orchestrator |                 "data": "osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3",
2026-02-23 20:26:05.887383 | orchestrator |                 "data_vg": "ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3"
2026-02-23 20:26:05.887393 | orchestrator |             },
2026-02-23 20:26:05.887407 | orchestrator |             {
2026-02-23 20:26:05.887417 | orchestrator |                 "data": "osd-block-721c0c76-436b-5140-8464-e8c748d186e3",
2026-02-23 20:26:05.887427 | orchestrator |                 "data_vg": "ceph-721c0c76-436b-5140-8464-e8c748d186e3"
2026-02-23 20:26:05.887436 | orchestrator |             }
2026-02-23 20:26:05.887446 | orchestrator |         ]
2026-02-23 20:26:05.887456 | orchestrator |     }
2026-02-23 20:26:05.887466 | orchestrator | }
2026-02-23 20:26:05.887475 | orchestrator |
2026-02-23 20:26:05.887485 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-23 20:26:05.887572 | orchestrator | Monday 23 February 2026 20:26:05 +0000 (0:00:00.244) 0:00:39.367 *******
2026-02-23 20:26:05.887583 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-23 20:26:05.887592 | orchestrator |
2026-02-23 20:26:05.887602 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:26:05.887612 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-23 20:26:05.887623 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-23 20:26:05.887633 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-23 20:26:05.887643 | orchestrator |
2026-02-23 20:26:05.887652 | orchestrator |
2026-02-23 20:26:05.887662 | orchestrator |
2026-02-23 20:26:05.887672 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:26:05.887681 | orchestrator | Monday 23 February 2026 20:26:05 +0000 (0:00:00.859) 0:00:40.227 *******
2026-02-23 20:26:05.887700 | orchestrator | ===============================================================================
2026-02-23 20:26:05.887710 | orchestrator | Write configuration file ------------------------------------------------ 3.59s
2026-02-23 20:26:05.887720 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s
2026-02-23 20:26:05.887738 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2026-02-23 20:26:05.887748 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.07s
2026-02-23 20:26:05.887758 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s
2026-02-23 20:26:05.887767 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s
2026-02-23 20:26:05.887777 | orchestrator | Print configuration data ------------------------------------------------ 0.86s
2026-02-23 20:26:05.887787 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s
2026-02-23 20:26:05.887796 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s
2026-02-23 20:26:05.887806 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-02-23 20:26:05.887816 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-02-23 20:26:05.887825 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.64s
2026-02-23 20:26:05.887835 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2026-02-23 20:26:05.887856 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2026-02-23 20:26:06.094738 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.62s
2026-02-23 20:26:06.094837 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-02-23 20:26:06.094852 | orchestrator | Print WAL devices ------------------------------------------------------- 0.60s
2026-02-23 20:26:06.094873 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s
2026-02-23 20:26:06.094887 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2026-02-23 20:26:06.094898 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.52s
2026-02-23 20:26:28.481479 | orchestrator | 2026-02-23 20:26:28 | INFO  | Task f9b0fbd5-449b-408f-bf34-b500fb7db1f9 (sync inventory) is running in background. Output coming soon.
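The "Print configuration data" output above shows the pattern the playbook follows: each OSD device's `osd_lvm_uuid` is expanded into an `lvm_volumes` entry whose LV is named `osd-block-<uuid>` and whose VG is named `ceph-<uuid>`. A minimal sketch of that mapping (a hypothetical helper for illustration, not the playbook's actual code):

```python
def build_lvm_volumes(ceph_osd_devices):
    # Derive LV/VG names from each device's osd_lvm_uuid, matching the
    # "osd-block-<uuid>" / "ceph-<uuid>" naming visible in the log output.
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# Values taken from the testbed-node-5 "Print ceph_osd_devices" output above.
devices = {
    "sdb": {"osd_lvm_uuid": "086e8658-baeb-56a9-865d-4af6c70c9ca3"},
    "sdc": {"osd_lvm_uuid": "721c0c76-436b-5140-8464-e8c748d186e3"},
}
print(build_lvm_volumes(devices))
```

This reproduces the `lvm_volumes` list printed for testbed-node-5 in the block-only scenario; the skipped `block + db` / `block + wal` tasks would add `db`/`wal` keys to each entry instead.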
2026-02-23 20:26:53.812469 | orchestrator | 2026-02-23 20:26:29 | INFO  | Starting group_vars file reorganization
2026-02-23 20:26:53.812634 | orchestrator | 2026-02-23 20:26:29 | INFO  | Moved 0 file(s) to their respective directories
2026-02-23 20:26:53.812662 | orchestrator | 2026-02-23 20:26:29 | INFO  | Group_vars file reorganization completed
2026-02-23 20:26:53.812672 | orchestrator | 2026-02-23 20:26:32 | INFO  | Starting variable preparation from inventory
2026-02-23 20:26:53.812703 | orchestrator | 2026-02-23 20:26:35 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-23 20:26:53.812714 | orchestrator | 2026-02-23 20:26:35 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-23 20:26:53.812739 | orchestrator | 2026-02-23 20:26:35 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-23 20:26:53.812749 | orchestrator | 2026-02-23 20:26:35 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-23 20:26:53.812758 | orchestrator | 2026-02-23 20:26:35 | INFO  | Variable preparation completed
2026-02-23 20:26:53.812767 | orchestrator | 2026-02-23 20:26:36 | INFO  | Starting inventory overwrite handling
2026-02-23 20:26:53.812776 | orchestrator | 2026-02-23 20:26:36 | INFO  | Handling group overwrites in 99-overwrite
2026-02-23 20:26:53.812785 | orchestrator | 2026-02-23 20:26:36 | INFO  | Removing group frr:children from 60-generic
2026-02-23 20:26:53.812818 | orchestrator | 2026-02-23 20:26:36 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-23 20:26:53.812827 | orchestrator | 2026-02-23 20:26:36 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-23 20:26:53.812836 | orchestrator | 2026-02-23 20:26:36 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-23 20:26:53.812845 | orchestrator | 2026-02-23 20:26:36 | INFO  | Handling group overwrites in 20-roles
2026-02-23 20:26:53.812854 | orchestrator | 2026-02-23 20:26:36 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-23 20:26:53.812868 | orchestrator | 2026-02-23 20:26:36 | INFO  | Removed 5 group(s) in total
2026-02-23 20:26:53.812883 | orchestrator | 2026-02-23 20:26:36 | INFO  | Inventory overwrite handling completed
2026-02-23 20:26:53.812897 | orchestrator | 2026-02-23 20:26:37 | INFO  | Starting merge of inventory files
2026-02-23 20:26:53.812911 | orchestrator | 2026-02-23 20:26:37 | INFO  | Inventory files merged successfully
2026-02-23 20:26:53.812925 | orchestrator | 2026-02-23 20:26:41 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-23 20:26:53.812938 | orchestrator | 2026-02-23 20:26:52 | INFO  | Successfully wrote ClusterShell configuration
2026-02-23 20:26:53.812952 | orchestrator | [master c64cd1d] 2026-02-23-20-26
2026-02-23 20:26:53.812968 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-23 20:26:55.858789 | orchestrator | 2026-02-23 20:26:55 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-02-23 20:26:55.911671 | orchestrator | 2026-02-23 20:26:55 | INFO  | Task 230d2cbe-8e20-479d-b269-9a794b8423b5 (ceph-create-lvm-devices) was prepared for execution.
2026-02-23 20:26:55.911771 | orchestrator | 2026-02-23 20:26:55 | INFO  | It takes a moment until task 230d2cbe-8e20-479d-b269-9a794b8423b5 (ceph-create-lvm-devices) has been started and output is visible here.
2026-02-23 20:27:05.532288 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-23 20:27:05.532385 | orchestrator | 2.16.14
2026-02-23 20:27:05.532399 | orchestrator |
2026-02-23 20:27:05.532407 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-23 20:27:05.532414 | orchestrator |
2026-02-23 20:27:05.532421 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-23 20:27:05.532428 | orchestrator | Monday 23 February 2026 20:26:59 +0000 (0:00:00.251) 0:00:00.251 *******
2026-02-23 20:27:05.532435 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-23 20:27:05.532442 | orchestrator |
2026-02-23 20:27:05.532449 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-23 20:27:05.532455 | orchestrator | Monday 23 February 2026 20:26:59 +0000 (0:00:00.219) 0:00:00.471 *******
2026-02-23 20:27:05.532461 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:27:05.532468 | orchestrator |
2026-02-23 20:27:05.532474 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:05.532481 | orchestrator | Monday 23 February 2026 20:26:59 +0000 (0:00:00.241) 0:00:00.712 *******
2026-02-23 20:27:05.532488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-23 20:27:05.532495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-23 20:27:05.532501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-23 20:27:05.532508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-23 20:27:05.532515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-23 
20:27:05.532614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-23 20:27:05.532624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-23 20:27:05.532650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-23 20:27:05.532657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-23 20:27:05.532664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-23 20:27:05.532671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-23 20:27:05.532677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-23 20:27:05.532684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-23 20:27:05.532691 | orchestrator | 2026-02-23 20:27:05.532697 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.532704 | orchestrator | Monday 23 February 2026 20:27:00 +0000 (0:00:00.418) 0:00:01.131 ******* 2026-02-23 20:27:05.532711 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.532718 | orchestrator | 2026-02-23 20:27:05.532724 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.532731 | orchestrator | Monday 23 February 2026 20:27:00 +0000 (0:00:00.184) 0:00:01.315 ******* 2026-02-23 20:27:05.532738 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.532745 | orchestrator | 2026-02-23 20:27:05.532752 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.532758 | orchestrator | Monday 23 February 2026 20:27:00 +0000 (0:00:00.165) 0:00:01.480 ******* 2026-02-23 
20:27:05.532769 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.532775 | orchestrator | 2026-02-23 20:27:05.532782 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.532789 | orchestrator | Monday 23 February 2026 20:27:00 +0000 (0:00:00.160) 0:00:01.641 ******* 2026-02-23 20:27:05.532795 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.532802 | orchestrator | 2026-02-23 20:27:05.532809 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.532815 | orchestrator | Monday 23 February 2026 20:27:00 +0000 (0:00:00.190) 0:00:01.831 ******* 2026-02-23 20:27:05.532822 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.532828 | orchestrator | 2026-02-23 20:27:05.532834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.532857 | orchestrator | Monday 23 February 2026 20:27:00 +0000 (0:00:00.165) 0:00:01.996 ******* 2026-02-23 20:27:05.532864 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.532871 | orchestrator | 2026-02-23 20:27:05.532878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.532884 | orchestrator | Monday 23 February 2026 20:27:01 +0000 (0:00:00.182) 0:00:02.179 ******* 2026-02-23 20:27:05.532890 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.532897 | orchestrator | 2026-02-23 20:27:05.532903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.532909 | orchestrator | Monday 23 February 2026 20:27:01 +0000 (0:00:00.196) 0:00:02.375 ******* 2026-02-23 20:27:05.532916 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.532923 | orchestrator | 2026-02-23 20:27:05.532929 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-23 20:27:05.532936 | orchestrator | Monday 23 February 2026 20:27:01 +0000 (0:00:00.179) 0:00:02.555 ******* 2026-02-23 20:27:05.532944 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba) 2026-02-23 20:27:05.532952 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba) 2026-02-23 20:27:05.532959 | orchestrator | 2026-02-23 20:27:05.532966 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.532992 | orchestrator | Monday 23 February 2026 20:27:01 +0000 (0:00:00.361) 0:00:02.916 ******* 2026-02-23 20:27:05.533008 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da) 2026-02-23 20:27:05.533015 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da) 2026-02-23 20:27:05.533021 | orchestrator | 2026-02-23 20:27:05.533027 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.533034 | orchestrator | Monday 23 February 2026 20:27:02 +0000 (0:00:00.526) 0:00:03.443 ******* 2026-02-23 20:27:05.533040 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4) 2026-02-23 20:27:05.533046 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4) 2026-02-23 20:27:05.533051 | orchestrator | 2026-02-23 20:27:05.533058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.533064 | orchestrator | Monday 23 February 2026 20:27:02 +0000 (0:00:00.535) 0:00:03.978 ******* 2026-02-23 20:27:05.533070 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9) 2026-02-23 20:27:05.533076 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9) 2026-02-23 20:27:05.533082 | orchestrator | 2026-02-23 20:27:05.533089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:05.533095 | orchestrator | Monday 23 February 2026 20:27:03 +0000 (0:00:00.682) 0:00:04.660 ******* 2026-02-23 20:27:05.533101 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-23 20:27:05.533107 | orchestrator | 2026-02-23 20:27:05.533113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:05.533120 | orchestrator | Monday 23 February 2026 20:27:03 +0000 (0:00:00.318) 0:00:04.979 ******* 2026-02-23 20:27:05.533126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-23 20:27:05.533132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-23 20:27:05.533138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-23 20:27:05.533145 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-23 20:27:05.533151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-23 20:27:05.533163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-23 20:27:05.533171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-23 20:27:05.533177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-23 20:27:05.533183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-23 20:27:05.533189 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-23 20:27:05.533196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-23 20:27:05.533205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-23 20:27:05.533212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-23 20:27:05.533218 | orchestrator | 2026-02-23 20:27:05.533224 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:05.533230 | orchestrator | Monday 23 February 2026 20:27:04 +0000 (0:00:00.391) 0:00:05.371 ******* 2026-02-23 20:27:05.533237 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.533243 | orchestrator | 2026-02-23 20:27:05.533249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:05.533255 | orchestrator | Monday 23 February 2026 20:27:04 +0000 (0:00:00.167) 0:00:05.538 ******* 2026-02-23 20:27:05.533268 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.533274 | orchestrator | 2026-02-23 20:27:05.533280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:05.533286 | orchestrator | Monday 23 February 2026 20:27:04 +0000 (0:00:00.165) 0:00:05.703 ******* 2026-02-23 20:27:05.533292 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.533299 | orchestrator | 2026-02-23 20:27:05.533305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:05.533311 | orchestrator | Monday 23 February 2026 20:27:04 +0000 (0:00:00.178) 0:00:05.882 ******* 2026-02-23 20:27:05.533317 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.533323 | orchestrator | 2026-02-23 20:27:05.533330 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-23 20:27:05.533336 | orchestrator | Monday 23 February 2026 20:27:05 +0000 (0:00:00.171) 0:00:06.053 ******* 2026-02-23 20:27:05.533342 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.533349 | orchestrator | 2026-02-23 20:27:05.533355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:05.533361 | orchestrator | Monday 23 February 2026 20:27:05 +0000 (0:00:00.166) 0:00:06.219 ******* 2026-02-23 20:27:05.533368 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.533375 | orchestrator | 2026-02-23 20:27:05.533381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:05.533387 | orchestrator | Monday 23 February 2026 20:27:05 +0000 (0:00:00.172) 0:00:06.392 ******* 2026-02-23 20:27:05.533393 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:05.533399 | orchestrator | 2026-02-23 20:27:05.533412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:13.423784 | orchestrator | Monday 23 February 2026 20:27:05 +0000 (0:00:00.180) 0:00:06.573 ******* 2026-02-23 20:27:13.423872 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.423883 | orchestrator | 2026-02-23 20:27:13.423891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:13.423899 | orchestrator | Monday 23 February 2026 20:27:05 +0000 (0:00:00.200) 0:00:06.773 ******* 2026-02-23 20:27:13.423906 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-23 20:27:13.423914 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-23 20:27:13.423922 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-23 20:27:13.423929 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-23 20:27:13.423936 | orchestrator | 2026-02-23 
20:27:13.423943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:13.423950 | orchestrator | Monday 23 February 2026 20:27:06 +0000 (0:00:00.994) 0:00:07.767 ******* 2026-02-23 20:27:13.423956 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.423963 | orchestrator | 2026-02-23 20:27:13.423970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:13.423977 | orchestrator | Monday 23 February 2026 20:27:06 +0000 (0:00:00.198) 0:00:07.966 ******* 2026-02-23 20:27:13.423984 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.423990 | orchestrator | 2026-02-23 20:27:13.423997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:13.424004 | orchestrator | Monday 23 February 2026 20:27:07 +0000 (0:00:00.197) 0:00:08.164 ******* 2026-02-23 20:27:13.424010 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424017 | orchestrator | 2026-02-23 20:27:13.424024 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-23 20:27:13.424030 | orchestrator | Monday 23 February 2026 20:27:07 +0000 (0:00:00.190) 0:00:08.355 ******* 2026-02-23 20:27:13.424037 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424044 | orchestrator | 2026-02-23 20:27:13.424051 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-23 20:27:13.424057 | orchestrator | Monday 23 February 2026 20:27:07 +0000 (0:00:00.172) 0:00:08.528 ******* 2026-02-23 20:27:13.424064 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424089 | orchestrator | 2026-02-23 20:27:13.424096 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-23 20:27:13.424103 | orchestrator | Monday 23 February 2026 20:27:07 +0000 (0:00:00.117) 
0:00:08.646 ******* 2026-02-23 20:27:13.424110 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16360c2d-86c0-538a-b982-f32cf88f5f8a'}}) 2026-02-23 20:27:13.424117 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fef89255-3917-5f7c-b809-8ef443377219'}}) 2026-02-23 20:27:13.424123 | orchestrator | 2026-02-23 20:27:13.424130 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-23 20:27:13.424137 | orchestrator | Monday 23 February 2026 20:27:07 +0000 (0:00:00.218) 0:00:08.864 ******* 2026-02-23 20:27:13.424144 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'}) 2026-02-23 20:27:13.424152 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'}) 2026-02-23 20:27:13.424158 | orchestrator | 2026-02-23 20:27:13.424165 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-23 20:27:13.424172 | orchestrator | Monday 23 February 2026 20:27:09 +0000 (0:00:02.001) 0:00:10.866 ******* 2026-02-23 20:27:13.424178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})  2026-02-23 20:27:13.424187 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})  2026-02-23 20:27:13.424193 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424200 | orchestrator | 2026-02-23 20:27:13.424207 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-23 20:27:13.424214 | orchestrator | Monday 23 February 2026 
20:27:09 +0000 (0:00:00.138) 0:00:11.005 ******* 2026-02-23 20:27:13.424220 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'}) 2026-02-23 20:27:13.424227 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'}) 2026-02-23 20:27:13.424234 | orchestrator | 2026-02-23 20:27:13.424254 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-23 20:27:13.424261 | orchestrator | Monday 23 February 2026 20:27:11 +0000 (0:00:01.490) 0:00:12.496 ******* 2026-02-23 20:27:13.424267 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})  2026-02-23 20:27:13.424274 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})  2026-02-23 20:27:13.424281 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424287 | orchestrator | 2026-02-23 20:27:13.424294 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-23 20:27:13.424301 | orchestrator | Monday 23 February 2026 20:27:11 +0000 (0:00:00.159) 0:00:12.655 ******* 2026-02-23 20:27:13.424320 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424327 | orchestrator | 2026-02-23 20:27:13.424334 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-23 20:27:13.424342 | orchestrator | Monday 23 February 2026 20:27:11 +0000 (0:00:00.121) 0:00:12.776 ******* 2026-02-23 20:27:13.424350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 
'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})  2026-02-23 20:27:13.424357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})  2026-02-23 20:27:13.424371 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424379 | orchestrator | 2026-02-23 20:27:13.424386 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-23 20:27:13.424394 | orchestrator | Monday 23 February 2026 20:27:12 +0000 (0:00:00.369) 0:00:13.145 ******* 2026-02-23 20:27:13.424401 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424409 | orchestrator | 2026-02-23 20:27:13.424416 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-23 20:27:13.424423 | orchestrator | Monday 23 February 2026 20:27:12 +0000 (0:00:00.141) 0:00:13.287 ******* 2026-02-23 20:27:13.424431 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})  2026-02-23 20:27:13.424439 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})  2026-02-23 20:27:13.424446 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424454 | orchestrator | 2026-02-23 20:27:13.424461 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-23 20:27:13.424469 | orchestrator | Monday 23 February 2026 20:27:12 +0000 (0:00:00.157) 0:00:13.445 ******* 2026-02-23 20:27:13.424476 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424483 | orchestrator | 2026-02-23 20:27:13.424491 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-23 20:27:13.424498 | orchestrator | Monday 
23 February 2026 20:27:12 +0000 (0:00:00.129) 0:00:13.575 ******* 2026-02-23 20:27:13.424506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})  2026-02-23 20:27:13.424517 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})  2026-02-23 20:27:13.424544 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424556 | orchestrator | 2026-02-23 20:27:13.424566 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-23 20:27:13.424579 | orchestrator | Monday 23 February 2026 20:27:12 +0000 (0:00:00.146) 0:00:13.722 ******* 2026-02-23 20:27:13.424589 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:27:13.424600 | orchestrator | 2026-02-23 20:27:13.424610 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-23 20:27:13.424617 | orchestrator | Monday 23 February 2026 20:27:12 +0000 (0:00:00.148) 0:00:13.870 ******* 2026-02-23 20:27:13.424624 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})  2026-02-23 20:27:13.424631 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})  2026-02-23 20:27:13.424637 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424644 | orchestrator | 2026-02-23 20:27:13.424651 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-23 20:27:13.424658 | orchestrator | Monday 23 February 2026 20:27:12 +0000 (0:00:00.131) 0:00:14.002 ******* 2026-02-23 20:27:13.424665 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})  2026-02-23 20:27:13.424671 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})  2026-02-23 20:27:13.424678 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424685 | orchestrator | 2026-02-23 20:27:13.424691 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-23 20:27:13.424703 | orchestrator | Monday 23 February 2026 20:27:13 +0000 (0:00:00.157) 0:00:14.160 ******* 2026-02-23 20:27:13.424710 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})  2026-02-23 20:27:13.424717 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})  2026-02-23 20:27:13.424724 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424730 | orchestrator | 2026-02-23 20:27:13.424737 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-23 20:27:13.424744 | orchestrator | Monday 23 February 2026 20:27:13 +0000 (0:00:00.158) 0:00:14.319 ******* 2026-02-23 20:27:13.424751 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:13.424757 | orchestrator | 2026-02-23 20:27:13.424764 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-23 20:27:13.424776 | orchestrator | Monday 23 February 2026 20:27:13 +0000 (0:00:00.142) 0:00:14.462 ******* 2026-02-23 20:27:19.640486 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:19.640638 | orchestrator | 2026-02-23 20:27:19.640654 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-23 20:27:19.640666 | orchestrator | Monday 23 February 2026 20:27:13 +0000 (0:00:00.144) 0:00:14.607 ******* 2026-02-23 20:27:19.640675 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:19.640684 | orchestrator | 2026-02-23 20:27:19.640693 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-23 20:27:19.640702 | orchestrator | Monday 23 February 2026 20:27:13 +0000 (0:00:00.138) 0:00:14.745 ******* 2026-02-23 20:27:19.640711 | orchestrator | ok: [testbed-node-3] => { 2026-02-23 20:27:19.640721 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-23 20:27:19.640730 | orchestrator | } 2026-02-23 20:27:19.640739 | orchestrator | 2026-02-23 20:27:19.640748 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-23 20:27:19.640756 | orchestrator | Monday 23 February 2026 20:27:14 +0000 (0:00:00.359) 0:00:15.105 ******* 2026-02-23 20:27:19.640765 | orchestrator | ok: [testbed-node-3] => { 2026-02-23 20:27:19.640774 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-23 20:27:19.640783 | orchestrator | } 2026-02-23 20:27:19.640792 | orchestrator | 2026-02-23 20:27:19.640800 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-23 20:27:19.640809 | orchestrator | Monday 23 February 2026 20:27:14 +0000 (0:00:00.148) 0:00:15.253 ******* 2026-02-23 20:27:19.640818 | orchestrator | ok: [testbed-node-3] => { 2026-02-23 20:27:19.640827 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-23 20:27:19.640836 | orchestrator | } 2026-02-23 20:27:19.640845 | orchestrator | 2026-02-23 20:27:19.640854 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-23 20:27:19.640891 | orchestrator | Monday 23 February 2026 20:27:14 +0000 (0:00:00.168) 0:00:15.422 ******* 2026-02-23 20:27:19.640900 | orchestrator | ok: 
[testbed-node-3] 2026-02-23 20:27:19.640909 | orchestrator | 2026-02-23 20:27:19.640918 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-23 20:27:19.640926 | orchestrator | Monday 23 February 2026 20:27:15 +0000 (0:00:00.681) 0:00:16.103 ******* 2026-02-23 20:27:19.640939 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:27:19.640953 | orchestrator | 2026-02-23 20:27:19.640962 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-23 20:27:19.640971 | orchestrator | Monday 23 February 2026 20:27:15 +0000 (0:00:00.540) 0:00:16.644 ******* 2026-02-23 20:27:19.640979 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:27:19.640988 | orchestrator | 2026-02-23 20:27:19.640996 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-23 20:27:19.641005 | orchestrator | Monday 23 February 2026 20:27:16 +0000 (0:00:00.530) 0:00:17.174 ******* 2026-02-23 20:27:19.641014 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:27:19.641022 | orchestrator | 2026-02-23 20:27:19.641052 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-23 20:27:19.641062 | orchestrator | Monday 23 February 2026 20:27:16 +0000 (0:00:00.180) 0:00:17.355 ******* 2026-02-23 20:27:19.641072 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:19.641082 | orchestrator | 2026-02-23 20:27:19.641092 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-23 20:27:19.641102 | orchestrator | Monday 23 February 2026 20:27:16 +0000 (0:00:00.107) 0:00:17.463 ******* 2026-02-23 20:27:19.641111 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:27:19.641121 | orchestrator | 2026-02-23 20:27:19.641131 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-23 20:27:19.641140 | orchestrator | 
Monday 23 February 2026 20:27:16 +0000 (0:00:00.109) 0:00:17.572 *******
2026-02-23 20:27:19.641154 | orchestrator | ok: [testbed-node-3] => {
2026-02-23 20:27:19.641164 | orchestrator |  "vgs_report": {
2026-02-23 20:27:19.641174 | orchestrator |  "vg": []
2026-02-23 20:27:19.641182 | orchestrator |  }
2026-02-23 20:27:19.641191 | orchestrator | }
2026-02-23 20:27:19.641200 | orchestrator |
2026-02-23 20:27:19.641209 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-23 20:27:19.641217 | orchestrator | Monday 23 February 2026 20:27:16 +0000 (0:00:00.133) 0:00:17.705 *******
2026-02-23 20:27:19.641226 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641234 | orchestrator |
2026-02-23 20:27:19.641243 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-23 20:27:19.641251 | orchestrator | Monday 23 February 2026 20:27:16 +0000 (0:00:00.125) 0:00:17.830 *******
2026-02-23 20:27:19.641260 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641268 | orchestrator |
2026-02-23 20:27:19.641277 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-23 20:27:19.641286 | orchestrator | Monday 23 February 2026 20:27:16 +0000 (0:00:00.149) 0:00:17.980 *******
2026-02-23 20:27:19.641294 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641303 | orchestrator |
2026-02-23 20:27:19.641312 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-23 20:27:19.641320 | orchestrator | Monday 23 February 2026 20:27:17 +0000 (0:00:00.239) 0:00:18.219 *******
2026-02-23 20:27:19.641329 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641337 | orchestrator |
2026-02-23 20:27:19.641346 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-23 20:27:19.641355 | orchestrator | Monday 23 February 2026 20:27:17 +0000 (0:00:00.138) 0:00:18.358 *******
2026-02-23 20:27:19.641363 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641372 | orchestrator |
2026-02-23 20:27:19.641380 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-23 20:27:19.641389 | orchestrator | Monday 23 February 2026 20:27:17 +0000 (0:00:00.133) 0:00:18.492 *******
2026-02-23 20:27:19.641397 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641406 | orchestrator |
2026-02-23 20:27:19.641414 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-23 20:27:19.641423 | orchestrator | Monday 23 February 2026 20:27:17 +0000 (0:00:00.126) 0:00:18.618 *******
2026-02-23 20:27:19.641431 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641440 | orchestrator |
2026-02-23 20:27:19.641455 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-23 20:27:19.641464 | orchestrator | Monday 23 February 2026 20:27:17 +0000 (0:00:00.127) 0:00:18.746 *******
2026-02-23 20:27:19.641491 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641500 | orchestrator |
2026-02-23 20:27:19.641509 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-23 20:27:19.641517 | orchestrator | Monday 23 February 2026 20:27:17 +0000 (0:00:00.113) 0:00:18.860 *******
2026-02-23 20:27:19.641578 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641594 | orchestrator |
2026-02-23 20:27:19.641604 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-23 20:27:19.641620 | orchestrator | Monday 23 February 2026 20:27:17 +0000 (0:00:00.133) 0:00:18.993 *******
2026-02-23 20:27:19.641629 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641637 | orchestrator |
2026-02-23 20:27:19.641646 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-23 20:27:19.641654 | orchestrator | Monday 23 February 2026 20:27:18 +0000 (0:00:00.116) 0:00:19.110 *******
2026-02-23 20:27:19.641663 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641672 | orchestrator |
2026-02-23 20:27:19.641694 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-23 20:27:19.641703 | orchestrator | Monday 23 February 2026 20:27:18 +0000 (0:00:00.132) 0:00:19.242 *******
2026-02-23 20:27:19.641711 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641720 | orchestrator |
2026-02-23 20:27:19.641729 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-23 20:27:19.641737 | orchestrator | Monday 23 February 2026 20:27:18 +0000 (0:00:00.124) 0:00:19.373 *******
2026-02-23 20:27:19.641746 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641754 | orchestrator |
2026-02-23 20:27:19.641763 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-23 20:27:19.641771 | orchestrator | Monday 23 February 2026 20:27:18 +0000 (0:00:00.124) 0:00:19.497 *******
2026-02-23 20:27:19.641780 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641788 | orchestrator |
2026-02-23 20:27:19.641797 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-23 20:27:19.641806 | orchestrator | Monday 23 February 2026 20:27:18 +0000 (0:00:00.135) 0:00:19.633 *******
2026-02-23 20:27:19.641816 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:19.641826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:19.641834 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641843 | orchestrator |
2026-02-23 20:27:19.641852 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-23 20:27:19.641864 | orchestrator | Monday 23 February 2026 20:27:18 +0000 (0:00:00.309) 0:00:19.942 *******
2026-02-23 20:27:19.641873 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:19.641885 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:19.641898 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641913 | orchestrator |
2026-02-23 20:27:19.641923 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-23 20:27:19.641931 | orchestrator | Monday 23 February 2026 20:27:19 +0000 (0:00:00.147) 0:00:20.090 *******
2026-02-23 20:27:19.641940 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:19.641949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:19.641958 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.641967 | orchestrator |
2026-02-23 20:27:19.641976 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-23 20:27:19.641984 | orchestrator | Monday 23 February 2026 20:27:19 +0000 (0:00:00.176) 0:00:20.266 *******
2026-02-23 20:27:19.641993 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:19.642002 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:19.642064 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.642078 | orchestrator |
2026-02-23 20:27:19.642089 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-23 20:27:19.642100 | orchestrator | Monday 23 February 2026 20:27:19 +0000 (0:00:00.165) 0:00:20.432 *******
2026-02-23 20:27:19.642110 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:19.642121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:19.642132 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:19.642142 | orchestrator |
2026-02-23 20:27:19.642153 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-23 20:27:19.642164 | orchestrator | Monday 23 February 2026 20:27:19 +0000 (0:00:00.161) 0:00:20.593 *******
2026-02-23 20:27:19.642186 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:24.556245 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:24.556368 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:24.556385 | orchestrator |
2026-02-23 20:27:24.556399 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-23 20:27:24.556413 | orchestrator | Monday 23 February 2026 20:27:19 +0000 (0:00:00.187) 0:00:20.781 *******
2026-02-23 20:27:24.556425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:24.556437 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:24.556448 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:24.556459 | orchestrator |
2026-02-23 20:27:24.556470 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-23 20:27:24.556481 | orchestrator | Monday 23 February 2026 20:27:19 +0000 (0:00:00.150) 0:00:20.931 *******
2026-02-23 20:27:24.556492 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:24.556503 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:24.556514 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:24.556580 | orchestrator |
2026-02-23 20:27:24.556600 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-23 20:27:24.556618 | orchestrator | Monday 23 February 2026 20:27:20 +0000 (0:00:00.155) 0:00:21.086 *******
2026-02-23 20:27:24.556636 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:27:24.556656 | orchestrator |
2026-02-23 20:27:24.556674 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-23 20:27:24.556719 | orchestrator | Monday 23 February 2026 20:27:20 +0000 (0:00:00.494) 0:00:21.581 *******
2026-02-23 20:27:24.556757 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:27:24.556771 | orchestrator |
2026-02-23 20:27:24.556784 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-23 20:27:24.556814 | orchestrator | Monday 23 February 2026 20:27:21 +0000 (0:00:00.507) 0:00:22.089 *******
2026-02-23 20:27:24.556826 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:27:24.556838 | orchestrator |
2026-02-23 20:27:24.556850 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-23 20:27:24.556863 | orchestrator | Monday 23 February 2026 20:27:21 +0000 (0:00:00.131) 0:00:22.220 *******
2026-02-23 20:27:24.556901 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'vg_name': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:24.556916 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'vg_name': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:24.556929 | orchestrator |
2026-02-23 20:27:24.556941 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-23 20:27:24.556954 | orchestrator | Monday 23 February 2026 20:27:21 +0000 (0:00:00.142) 0:00:22.363 *******
2026-02-23 20:27:24.556967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:24.556979 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:24.556996 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:24.557015 | orchestrator |
2026-02-23 20:27:24.557035 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-23 20:27:24.557055 | orchestrator | Monday 23 February 2026 20:27:21 +0000 (0:00:00.289) 0:00:22.653 *******
2026-02-23 20:27:24.557069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:24.557082 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:24.557093 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:24.557111 | orchestrator |
2026-02-23 20:27:24.557129 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-23 20:27:24.557147 | orchestrator | Monday 23 February 2026 20:27:21 +0000 (0:00:00.192) 0:00:22.845 *******
2026-02-23 20:27:24.557166 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'})
2026-02-23 20:27:24.557184 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'})
2026-02-23 20:27:24.557201 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:27:24.557221 | orchestrator |
2026-02-23 20:27:24.557239 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-23 20:27:24.557257 | orchestrator | Monday 23 February 2026 20:27:21 +0000 (0:00:00.145) 0:00:22.991 *******
2026-02-23 20:27:24.557291 | orchestrator | ok: [testbed-node-3] => {
2026-02-23 20:27:24.557302 | orchestrator |  "lvm_report": {
2026-02-23 20:27:24.557314 | orchestrator |  "lv": [
2026-02-23 20:27:24.557325 | orchestrator |  {
2026-02-23 20:27:24.557335 | orchestrator |  "lv_name": "osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a",
2026-02-23 20:27:24.557347 | orchestrator |  "vg_name": "ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a"
2026-02-23 20:27:24.557357 | orchestrator |  },
2026-02-23 20:27:24.557368 | orchestrator |  {
2026-02-23 20:27:24.557379 | orchestrator |  "lv_name": "osd-block-fef89255-3917-5f7c-b809-8ef443377219",
2026-02-23 20:27:24.557390 | orchestrator |  "vg_name": "ceph-fef89255-3917-5f7c-b809-8ef443377219"
2026-02-23 20:27:24.557400 | orchestrator |  }
2026-02-23 20:27:24.557411 | orchestrator |  ],
2026-02-23 20:27:24.557422 | orchestrator |  "pv": [
2026-02-23 20:27:24.557432 | orchestrator |  {
2026-02-23 20:27:24.557443 | orchestrator |  "pv_name": "/dev/sdb",
2026-02-23 20:27:24.557454 | orchestrator |  "vg_name": "ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a"
2026-02-23 20:27:24.557465 | orchestrator |  },
2026-02-23 20:27:24.557475 | orchestrator |  {
2026-02-23 20:27:24.557496 | orchestrator |  "pv_name": "/dev/sdc",
2026-02-23 20:27:24.557507 | orchestrator |  "vg_name": "ceph-fef89255-3917-5f7c-b809-8ef443377219"
2026-02-23 20:27:24.557518 | orchestrator |  }
2026-02-23 20:27:24.557583 | orchestrator |  ]
2026-02-23 20:27:24.557595 | orchestrator |  }
2026-02-23 20:27:24.557606 | orchestrator | }
2026-02-23 20:27:24.557617 | orchestrator |
2026-02-23 20:27:24.557628 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-23 20:27:24.557639 | orchestrator |
2026-02-23 20:27:24.557650 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-23 20:27:24.557661 | orchestrator | Monday 23 February 2026 20:27:22 +0000 (0:00:00.292) 0:00:23.284 *******
2026-02-23 20:27:24.557672 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-23 20:27:24.557683 | orchestrator |
2026-02-23 20:27:24.557694 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-23 20:27:24.557705 | orchestrator | Monday 23 February 2026 20:27:22 +0000 (0:00:00.250) 0:00:23.535 *******
2026-02-23 20:27:24.557716 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:27:24.557727 | orchestrator |
2026-02-23 20:27:24.557738 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:24.557749 | orchestrator | Monday 23 February 2026 20:27:22 +0000 (0:00:00.221) 0:00:23.756 *******
2026-02-23 20:27:24.557760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-23 20:27:24.557771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-23 20:27:24.557782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-23 20:27:24.557793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-23 20:27:24.557804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-23 20:27:24.557815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-23 20:27:24.557825 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-23 20:27:24.557836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-23 20:27:24.557847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-23 20:27:24.557858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-23 20:27:24.557868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-23 20:27:24.557879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-23 20:27:24.557889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-23 20:27:24.557900 | orchestrator |
2026-02-23 20:27:24.557911 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:24.557922 | orchestrator | Monday 23 February 2026 20:27:23 +0000 (0:00:00.422) 0:00:24.178 *******
2026-02-23 20:27:24.557933 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:24.557943 | orchestrator |
2026-02-23 20:27:24.557954 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:24.557975 | orchestrator | Monday 23 February 2026 20:27:23 +0000 (0:00:00.194) 0:00:24.373 *******
2026-02-23 20:27:24.557995 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:24.558011 | orchestrator |
2026-02-23 20:27:24.558228 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:24.558242 | orchestrator | Monday 23 February 2026 20:27:23 +0000 (0:00:00.174) 0:00:24.547 *******
2026-02-23 20:27:24.558253 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:24.558266 | orchestrator |
2026-02-23 20:27:24.558285 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:24.558319 | orchestrator | Monday 23 February 2026 20:27:23 +0000 (0:00:00.474) 0:00:25.022 *******
2026-02-23 20:27:24.558337 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:24.558355 | orchestrator |
2026-02-23 20:27:24.558366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:24.558377 | orchestrator | Monday 23 February 2026 20:27:24 +0000 (0:00:00.180) 0:00:25.202 *******
2026-02-23 20:27:24.558388 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:24.558399 | orchestrator |
2026-02-23 20:27:24.558409 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:24.558420 | orchestrator | Monday 23 February 2026 20:27:24 +0000 (0:00:00.177) 0:00:25.379 *******
2026-02-23 20:27:24.558431 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:24.558442 | orchestrator |
2026-02-23 20:27:24.558466 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:35.379044 | orchestrator | Monday 23 February 2026 20:27:24 +0000 (0:00:00.213) 0:00:25.593 *******
2026-02-23 20:27:35.379145 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.379161 | orchestrator |
2026-02-23 20:27:35.379174 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:35.379186 | orchestrator | Monday 23 February 2026 20:27:24 +0000 (0:00:00.187) 0:00:25.781 *******
2026-02-23 20:27:35.379198 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.379209 | orchestrator |
2026-02-23 20:27:35.379220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:35.379231 | orchestrator | Monday 23 February 2026 20:27:24 +0000 (0:00:00.182) 0:00:25.963 *******
2026-02-23 20:27:35.379242 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0)
2026-02-23 20:27:35.379255 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0)
2026-02-23 20:27:35.379266 | orchestrator |
2026-02-23 20:27:35.379277 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:35.379288 | orchestrator | Monday 23 February 2026 20:27:25 +0000 (0:00:00.404) 0:00:26.368 *******
2026-02-23 20:27:35.379299 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654)
2026-02-23 20:27:35.379311 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654)
2026-02-23 20:27:35.379322 | orchestrator |
2026-02-23 20:27:35.379333 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:35.379344 | orchestrator | Monday 23 February 2026 20:27:25 +0000 (0:00:00.390) 0:00:26.758 *******
2026-02-23 20:27:35.379354 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21)
2026-02-23 20:27:35.379366 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21)
2026-02-23 20:27:35.379377 | orchestrator |
2026-02-23 20:27:35.379388 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:35.379399 | orchestrator | Monday 23 February 2026 20:27:26 +0000 (0:00:00.374) 0:00:27.133 *******
2026-02-23 20:27:35.379424 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a)
2026-02-23 20:27:35.379436 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a)
2026-02-23 20:27:35.379447 | orchestrator |
2026-02-23 20:27:35.379458 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:35.379469 | orchestrator | Monday 23 February 2026 20:27:26 +0000 (0:00:00.581) 0:00:27.714 *******
2026-02-23 20:27:35.379480 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-23 20:27:35.379491 | orchestrator |
2026-02-23 20:27:35.379502 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.379513 | orchestrator | Monday 23 February 2026 20:27:27 +0000 (0:00:00.482) 0:00:28.196 *******
2026-02-23 20:27:35.379576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-23 20:27:35.379591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-23 20:27:35.379603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-23 20:27:35.379615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-23 20:27:35.379627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-23 20:27:35.379639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-23 20:27:35.379651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-23 20:27:35.379664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-23 20:27:35.379676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-23 20:27:35.379688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-23 20:27:35.379700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-23 20:27:35.379712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-23 20:27:35.379722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-23 20:27:35.379733 | orchestrator |
2026-02-23 20:27:35.379744 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.379755 | orchestrator | Monday 23 February 2026 20:27:27 +0000 (0:00:00.754) 0:00:28.951 *******
2026-02-23 20:27:35.379766 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.379776 | orchestrator |
2026-02-23 20:27:35.379787 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.379798 | orchestrator | Monday 23 February 2026 20:27:28 +0000 (0:00:00.194) 0:00:29.145 *******
2026-02-23 20:27:35.379808 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.379819 | orchestrator |
2026-02-23 20:27:35.379830 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.379840 | orchestrator | Monday 23 February 2026 20:27:28 +0000 (0:00:00.203) 0:00:29.349 *******
2026-02-23 20:27:35.379851 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.379862 | orchestrator |
2026-02-23 20:27:35.379889 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.379901 | orchestrator | Monday 23 February 2026 20:27:28 +0000 (0:00:00.171) 0:00:29.520 *******
2026-02-23 20:27:35.379912 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.379923 | orchestrator |
2026-02-23 20:27:35.379934 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.379945 | orchestrator | Monday 23 February 2026 20:27:28 +0000 (0:00:00.214) 0:00:29.735 *******
2026-02-23 20:27:35.379955 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.379966 | orchestrator |
2026-02-23 20:27:35.379977 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.379988 | orchestrator | Monday 23 February 2026 20:27:28 +0000 (0:00:00.190) 0:00:29.925 *******
2026-02-23 20:27:35.379999 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.380009 | orchestrator |
2026-02-23 20:27:35.380020 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.380031 | orchestrator | Monday 23 February 2026 20:27:29 +0000 (0:00:00.190) 0:00:30.116 *******
2026-02-23 20:27:35.380042 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.380052 | orchestrator |
2026-02-23 20:27:35.380063 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.380074 | orchestrator | Monday 23 February 2026 20:27:29 +0000 (0:00:00.178) 0:00:30.294 *******
2026-02-23 20:27:35.380093 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.380105 | orchestrator |
2026-02-23 20:27:35.380116 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.380126 | orchestrator | Monday 23 February 2026 20:27:29 +0000 (0:00:00.178) 0:00:30.472 *******
2026-02-23 20:27:35.380137 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-23 20:27:35.380148 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-23 20:27:35.380159 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-23 20:27:35.380170 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-23 20:27:35.380181 | orchestrator |
2026-02-23 20:27:35.380192 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.380203 | orchestrator | Monday 23 February 2026 20:27:30 +0000 (0:00:00.981) 0:00:31.454 *******
2026-02-23 20:27:35.380213 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.380224 | orchestrator |
2026-02-23 20:27:35.380235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.380245 | orchestrator | Monday 23 February 2026 20:27:30 +0000 (0:00:00.190) 0:00:31.644 *******
2026-02-23 20:27:35.380262 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.380273 | orchestrator |
2026-02-23 20:27:35.380284 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.380294 | orchestrator | Monday 23 February 2026 20:27:31 +0000 (0:00:00.674) 0:00:32.319 *******
2026-02-23 20:27:35.380305 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.380316 | orchestrator |
2026-02-23 20:27:35.380327 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:35.380338 | orchestrator | Monday 23 February 2026 20:27:31 +0000 (0:00:00.237) 0:00:32.556 *******
2026-02-23 20:27:35.380349 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.380359 | orchestrator |
2026-02-23 20:27:35.380370 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-23 20:27:35.380381 | orchestrator | Monday 23 February 2026 20:27:31 +0000 (0:00:00.203) 0:00:32.760 *******
2026-02-23 20:27:35.380392 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.380402 | orchestrator |
2026-02-23 20:27:35.380413 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-23 20:27:35.380424 | orchestrator | Monday 23 February 2026 20:27:31 +0000 (0:00:00.135) 0:00:32.896 *******
2026-02-23 20:27:35.380435 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b14837c-f03f-563c-b8ac-393f544981fc'}})
2026-02-23 20:27:35.380446 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21252442-555c-5549-b537-6075952af6e0'}})
2026-02-23 20:27:35.380457 | orchestrator |
2026-02-23 20:27:35.380468 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-23 20:27:35.380479 | orchestrator | Monday 23 February 2026 20:27:32 +0000 (0:00:00.185) 0:00:33.081 *******
2026-02-23 20:27:35.380490 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})
2026-02-23 20:27:35.380502 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})
2026-02-23 20:27:35.380513 | orchestrator |
2026-02-23 20:27:35.380543 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-23 20:27:35.380556 | orchestrator | Monday 23 February 2026 20:27:33 +0000 (0:00:01.898) 0:00:34.980 *******
2026-02-23 20:27:35.380567 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})
2026-02-23 20:27:35.380579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})
2026-02-23 20:27:35.380597 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:35.380608 | orchestrator |
2026-02-23 20:27:35.380620 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-23 20:27:35.380631 | orchestrator | Monday 23 February 2026 20:27:34 +0000 (0:00:00.154) 0:00:35.134 *******
2026-02-23 20:27:35.380642 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})
2026-02-23 20:27:35.380660 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})
2026-02-23 20:27:40.942311 | orchestrator |
2026-02-23 20:27:40.942399 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-23 20:27:40.942411 | orchestrator | Monday 23 February 2026 20:27:35 +0000 (0:00:01.388) 0:00:36.522 *******
2026-02-23 20:27:40.942419 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})
2026-02-23 20:27:40.942429 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})
2026-02-23 20:27:40.942436 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:40.942444 | orchestrator |
2026-02-23 20:27:40.942452 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-23 20:27:40.942459 | orchestrator | Monday 23 February 2026 20:27:35 +0000 (0:00:00.128) 0:00:36.651 *******
2026-02-23 20:27:40.942466 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:40.942473 | orchestrator |
2026-02-23 20:27:40.942480 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-23 20:27:40.942487 | orchestrator | Monday 23 February 2026 20:27:35 +0000 (0:00:00.139) 0:00:36.791 *******
2026-02-23 20:27:40.942494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})
2026-02-23 20:27:40.942501 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})
2026-02-23 20:27:40.942508 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:40.942515 | orchestrator |
2026-02-23 20:27:40.942522 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-23 20:27:40.942582 | orchestrator | Monday 23 February 2026 20:27:35 +0000 (0:00:00.151) 0:00:36.943 *******
2026-02-23 20:27:40.942589 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:40.942596 | orchestrator |
2026-02-23 20:27:40.942603 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-23 20:27:40.942610 | orchestrator | Monday 23 February 2026 20:27:36 +0000 (0:00:00.134) 0:00:37.077 *******
2026-02-23 20:27:40.942617 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})
2026-02-23 20:27:40.942624 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})
2026-02-23 20:27:40.942631 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:40.942637 | orchestrator |
2026-02-23 20:27:40.942644 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-23 20:27:40.942651 | orchestrator | Monday 23 February 2026 20:27:36 +0000 (0:00:00.301) 0:00:37.379 *******
2026-02-23 20:27:40.942658 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:40.942664 | orchestrator |
2026-02-23 20:27:40.942671 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-23 20:27:40.942678 | orchestrator | Monday 23 February 2026 20:27:36 +0000 (0:00:00.119) 0:00:37.498 *******
2026-02-23 20:27:40.942685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})
2026-02-23 20:27:40.942709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})
2026-02-23 20:27:40.942716 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:27:40.942723 | orchestrator |
2026-02-23 20:27:40.942730 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-23 20:27:40.942750 | orchestrator | Monday 23 February 2026 20:27:36 +0000 (0:00:00.126) 0:00:37.622 *******
2026-02-23 20:27:40.942757 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:27:40.942766 | orchestrator | 2026-02-23 20:27:40.942772 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-23 20:27:40.942779 | orchestrator | Monday 23 February 2026 20:27:36 +0000 (0:00:00.126) 0:00:37.749 ******* 2026-02-23 20:27:40.942786 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:40.942792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:40.942799 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.942806 | orchestrator | 2026-02-23 20:27:40.942812 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-23 20:27:40.942819 | orchestrator | Monday 23 February 2026 20:27:36 +0000 (0:00:00.165) 0:00:37.914 ******* 2026-02-23 20:27:40.942826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:40.942833 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:40.942839 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.942846 | orchestrator | 2026-02-23 20:27:40.942853 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-23 20:27:40.942873 | orchestrator | Monday 23 February 2026 20:27:37 +0000 (0:00:00.143) 0:00:38.058 ******* 2026-02-23 20:27:40.942882 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 
20:27:40.942890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:40.942897 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.942905 | orchestrator | 2026-02-23 20:27:40.942912 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-23 20:27:40.942920 | orchestrator | Monday 23 February 2026 20:27:37 +0000 (0:00:00.143) 0:00:38.202 ******* 2026-02-23 20:27:40.942928 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.942935 | orchestrator | 2026-02-23 20:27:40.942942 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-23 20:27:40.942950 | orchestrator | Monday 23 February 2026 20:27:37 +0000 (0:00:00.144) 0:00:38.346 ******* 2026-02-23 20:27:40.942957 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.942964 | orchestrator | 2026-02-23 20:27:40.942972 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-23 20:27:40.942980 | orchestrator | Monday 23 February 2026 20:27:37 +0000 (0:00:00.126) 0:00:38.473 ******* 2026-02-23 20:27:40.942987 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.942995 | orchestrator | 2026-02-23 20:27:40.943002 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-23 20:27:40.943010 | orchestrator | Monday 23 February 2026 20:27:37 +0000 (0:00:00.126) 0:00:38.599 ******* 2026-02-23 20:27:40.943017 | orchestrator | ok: [testbed-node-4] => { 2026-02-23 20:27:40.943025 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-23 20:27:40.943039 | orchestrator | } 2026-02-23 20:27:40.943047 | orchestrator | 2026-02-23 20:27:40.943055 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-23 
20:27:40.943063 | orchestrator | Monday 23 February 2026 20:27:37 +0000 (0:00:00.130) 0:00:38.730 ******* 2026-02-23 20:27:40.943071 | orchestrator | ok: [testbed-node-4] => { 2026-02-23 20:27:40.943078 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-23 20:27:40.943086 | orchestrator | } 2026-02-23 20:27:40.943093 | orchestrator | 2026-02-23 20:27:40.943105 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-23 20:27:40.943113 | orchestrator | Monday 23 February 2026 20:27:37 +0000 (0:00:00.154) 0:00:38.885 ******* 2026-02-23 20:27:40.943121 | orchestrator | ok: [testbed-node-4] => { 2026-02-23 20:27:40.943129 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-23 20:27:40.943137 | orchestrator | } 2026-02-23 20:27:40.943145 | orchestrator | 2026-02-23 20:27:40.943152 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-23 20:27:40.943160 | orchestrator | Monday 23 February 2026 20:27:38 +0000 (0:00:00.331) 0:00:39.216 ******* 2026-02-23 20:27:40.943168 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:27:40.943175 | orchestrator | 2026-02-23 20:27:40.943183 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-23 20:27:40.943190 | orchestrator | Monday 23 February 2026 20:27:38 +0000 (0:00:00.535) 0:00:39.752 ******* 2026-02-23 20:27:40.943198 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:27:40.943206 | orchestrator | 2026-02-23 20:27:40.943213 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-23 20:27:40.943221 | orchestrator | Monday 23 February 2026 20:27:39 +0000 (0:00:00.517) 0:00:40.270 ******* 2026-02-23 20:27:40.943228 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:27:40.943234 | orchestrator | 2026-02-23 20:27:40.943241 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-23 20:27:40.943248 | orchestrator | Monday 23 February 2026 20:27:39 +0000 (0:00:00.534) 0:00:40.804 ******* 2026-02-23 20:27:40.943255 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:27:40.943262 | orchestrator | 2026-02-23 20:27:40.943268 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-23 20:27:40.943275 | orchestrator | Monday 23 February 2026 20:27:39 +0000 (0:00:00.155) 0:00:40.959 ******* 2026-02-23 20:27:40.943282 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.943288 | orchestrator | 2026-02-23 20:27:40.943295 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-23 20:27:40.943302 | orchestrator | Monday 23 February 2026 20:27:40 +0000 (0:00:00.128) 0:00:41.087 ******* 2026-02-23 20:27:40.943308 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.943315 | orchestrator | 2026-02-23 20:27:40.943322 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-23 20:27:40.943328 | orchestrator | Monday 23 February 2026 20:27:40 +0000 (0:00:00.138) 0:00:41.226 ******* 2026-02-23 20:27:40.943335 | orchestrator | ok: [testbed-node-4] => { 2026-02-23 20:27:40.943342 | orchestrator |  "vgs_report": { 2026-02-23 20:27:40.943349 | orchestrator |  "vg": [] 2026-02-23 20:27:40.943356 | orchestrator |  } 2026-02-23 20:27:40.943363 | orchestrator | } 2026-02-23 20:27:40.943370 | orchestrator | 2026-02-23 20:27:40.943377 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-23 20:27:40.943384 | orchestrator | Monday 23 February 2026 20:27:40 +0000 (0:00:00.149) 0:00:41.375 ******* 2026-02-23 20:27:40.943390 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.943397 | orchestrator | 2026-02-23 20:27:40.943404 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-23 20:27:40.943410 | orchestrator | Monday 23 February 2026 20:27:40 +0000 (0:00:00.143) 0:00:41.519 ******* 2026-02-23 20:27:40.943417 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.943424 | orchestrator | 2026-02-23 20:27:40.943431 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-23 20:27:40.943442 | orchestrator | Monday 23 February 2026 20:27:40 +0000 (0:00:00.169) 0:00:41.689 ******* 2026-02-23 20:27:40.943449 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.943456 | orchestrator | 2026-02-23 20:27:40.943463 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-23 20:27:40.943470 | orchestrator | Monday 23 February 2026 20:27:40 +0000 (0:00:00.152) 0:00:41.842 ******* 2026-02-23 20:27:40.943476 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:40.943483 | orchestrator | 2026-02-23 20:27:40.943493 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-23 20:27:45.615963 | orchestrator | Monday 23 February 2026 20:27:40 +0000 (0:00:00.136) 0:00:41.979 ******* 2026-02-23 20:27:45.616067 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616084 | orchestrator | 2026-02-23 20:27:45.616097 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-23 20:27:45.616108 | orchestrator | Monday 23 February 2026 20:27:41 +0000 (0:00:00.363) 0:00:42.342 ******* 2026-02-23 20:27:45.616120 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616131 | orchestrator | 2026-02-23 20:27:45.616142 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-23 20:27:45.616153 | orchestrator | Monday 23 February 2026 20:27:41 +0000 (0:00:00.141) 0:00:42.484 ******* 2026-02-23 20:27:45.616164 | orchestrator | skipping: [testbed-node-4] 
2026-02-23 20:27:45.616175 | orchestrator | 2026-02-23 20:27:45.616186 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-23 20:27:45.616197 | orchestrator | Monday 23 February 2026 20:27:41 +0000 (0:00:00.139) 0:00:42.623 ******* 2026-02-23 20:27:45.616208 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616218 | orchestrator | 2026-02-23 20:27:45.616229 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-23 20:27:45.616240 | orchestrator | Monday 23 February 2026 20:27:41 +0000 (0:00:00.138) 0:00:42.762 ******* 2026-02-23 20:27:45.616251 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616262 | orchestrator | 2026-02-23 20:27:45.616273 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-23 20:27:45.616284 | orchestrator | Monday 23 February 2026 20:27:41 +0000 (0:00:00.131) 0:00:42.894 ******* 2026-02-23 20:27:45.616294 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616305 | orchestrator | 2026-02-23 20:27:45.616316 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-23 20:27:45.616327 | orchestrator | Monday 23 February 2026 20:27:41 +0000 (0:00:00.138) 0:00:43.033 ******* 2026-02-23 20:27:45.616338 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616348 | orchestrator | 2026-02-23 20:27:45.616359 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-23 20:27:45.616370 | orchestrator | Monday 23 February 2026 20:27:42 +0000 (0:00:00.140) 0:00:43.174 ******* 2026-02-23 20:27:45.616399 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616411 | orchestrator | 2026-02-23 20:27:45.616422 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-23 20:27:45.616433 | orchestrator | 
Monday 23 February 2026 20:27:42 +0000 (0:00:00.143) 0:00:43.317 ******* 2026-02-23 20:27:45.616443 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616455 | orchestrator | 2026-02-23 20:27:45.616466 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-23 20:27:45.616477 | orchestrator | Monday 23 February 2026 20:27:42 +0000 (0:00:00.151) 0:00:43.469 ******* 2026-02-23 20:27:45.616488 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616500 | orchestrator | 2026-02-23 20:27:45.616515 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-23 20:27:45.616616 | orchestrator | Monday 23 February 2026 20:27:42 +0000 (0:00:00.132) 0:00:43.602 ******* 2026-02-23 20:27:45.616634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:45.616673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:45.616686 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616698 | orchestrator | 2026-02-23 20:27:45.616711 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-23 20:27:45.616723 | orchestrator | Monday 23 February 2026 20:27:42 +0000 (0:00:00.159) 0:00:43.761 ******* 2026-02-23 20:27:45.616735 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:45.616747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:45.616760 | orchestrator | skipping: 
[testbed-node-4] 2026-02-23 20:27:45.616771 | orchestrator | 2026-02-23 20:27:45.616784 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-23 20:27:45.616796 | orchestrator | Monday 23 February 2026 20:27:42 +0000 (0:00:00.164) 0:00:43.926 ******* 2026-02-23 20:27:45.616808 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:45.616819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:45.616830 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616840 | orchestrator | 2026-02-23 20:27:45.616851 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-23 20:27:45.616862 | orchestrator | Monday 23 February 2026 20:27:43 +0000 (0:00:00.380) 0:00:44.306 ******* 2026-02-23 20:27:45.616873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:45.616884 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:45.616894 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616905 | orchestrator | 2026-02-23 20:27:45.616935 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-23 20:27:45.616946 | orchestrator | Monday 23 February 2026 20:27:43 +0000 (0:00:00.170) 0:00:44.477 ******* 2026-02-23 20:27:45.616957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 
'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:45.616969 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:45.616980 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.616991 | orchestrator | 2026-02-23 20:27:45.617002 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-23 20:27:45.617012 | orchestrator | Monday 23 February 2026 20:27:43 +0000 (0:00:00.166) 0:00:44.643 ******* 2026-02-23 20:27:45.617023 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:45.617034 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:45.617045 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.617055 | orchestrator | 2026-02-23 20:27:45.617066 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-23 20:27:45.617077 | orchestrator | Monday 23 February 2026 20:27:43 +0000 (0:00:00.158) 0:00:44.802 ******* 2026-02-23 20:27:45.617087 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:45.617106 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:45.617117 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.617128 | orchestrator | 2026-02-23 20:27:45.617139 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-23 
20:27:45.617150 | orchestrator | Monday 23 February 2026 20:27:43 +0000 (0:00:00.157) 0:00:44.960 ******* 2026-02-23 20:27:45.617161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:45.617171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:45.617182 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.617193 | orchestrator | 2026-02-23 20:27:45.617204 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-23 20:27:45.617214 | orchestrator | Monday 23 February 2026 20:27:44 +0000 (0:00:00.163) 0:00:45.123 ******* 2026-02-23 20:27:45.617225 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:27:45.617236 | orchestrator | 2026-02-23 20:27:45.617247 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-23 20:27:45.617257 | orchestrator | Monday 23 February 2026 20:27:44 +0000 (0:00:00.539) 0:00:45.662 ******* 2026-02-23 20:27:45.617268 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:27:45.617279 | orchestrator | 2026-02-23 20:27:45.617289 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-23 20:27:45.617300 | orchestrator | Monday 23 February 2026 20:27:45 +0000 (0:00:00.509) 0:00:46.171 ******* 2026-02-23 20:27:45.617311 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:27:45.617321 | orchestrator | 2026-02-23 20:27:45.617332 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-23 20:27:45.617343 | orchestrator | Monday 23 February 2026 20:27:45 +0000 (0:00:00.135) 0:00:46.307 ******* 2026-02-23 20:27:45.617354 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-21252442-555c-5549-b537-6075952af6e0', 'vg_name': 'ceph-21252442-555c-5549-b537-6075952af6e0'}) 2026-02-23 20:27:45.617366 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'vg_name': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'}) 2026-02-23 20:27:45.617377 | orchestrator | 2026-02-23 20:27:45.617387 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-23 20:27:45.617398 | orchestrator | Monday 23 February 2026 20:27:45 +0000 (0:00:00.150) 0:00:46.458 ******* 2026-02-23 20:27:45.617409 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:45.617419 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:45.617430 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:45.617441 | orchestrator | 2026-02-23 20:27:45.617452 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-23 20:27:45.617462 | orchestrator | Monday 23 February 2026 20:27:45 +0000 (0:00:00.134) 0:00:46.593 ******* 2026-02-23 20:27:45.617473 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:45.617491 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:51.286276 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:51.286415 | orchestrator | 2026-02-23 20:27:51.286429 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-23 20:27:51.286440 | 
orchestrator | Monday 23 February 2026 20:27:45 +0000 (0:00:00.159) 0:00:46.752 ******* 2026-02-23 20:27:51.286449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'})  2026-02-23 20:27:51.286459 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'})  2026-02-23 20:27:51.286467 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:27:51.286475 | orchestrator | 2026-02-23 20:27:51.286483 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-23 20:27:51.286492 | orchestrator | Monday 23 February 2026 20:27:45 +0000 (0:00:00.156) 0:00:46.909 ******* 2026-02-23 20:27:51.286500 | orchestrator | ok: [testbed-node-4] => { 2026-02-23 20:27:51.286508 | orchestrator |  "lvm_report": { 2026-02-23 20:27:51.286517 | orchestrator |  "lv": [ 2026-02-23 20:27:51.286579 | orchestrator |  { 2026-02-23 20:27:51.286588 | orchestrator |  "lv_name": "osd-block-21252442-555c-5549-b537-6075952af6e0", 2026-02-23 20:27:51.286597 | orchestrator |  "vg_name": "ceph-21252442-555c-5549-b537-6075952af6e0" 2026-02-23 20:27:51.286605 | orchestrator |  }, 2026-02-23 20:27:51.286612 | orchestrator |  { 2026-02-23 20:27:51.286621 | orchestrator |  "lv_name": "osd-block-2b14837c-f03f-563c-b8ac-393f544981fc", 2026-02-23 20:27:51.286629 | orchestrator |  "vg_name": "ceph-2b14837c-f03f-563c-b8ac-393f544981fc" 2026-02-23 20:27:51.286637 | orchestrator |  } 2026-02-23 20:27:51.286645 | orchestrator |  ], 2026-02-23 20:27:51.286654 | orchestrator |  "pv": [ 2026-02-23 20:27:51.286662 | orchestrator |  { 2026-02-23 20:27:51.286671 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-23 20:27:51.286685 | orchestrator |  "vg_name": "ceph-2b14837c-f03f-563c-b8ac-393f544981fc" 2026-02-23 20:27:51.286695 | orchestrator |  }, 2026-02-23 
20:27:51.286704 | orchestrator |  { 2026-02-23 20:27:51.286712 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-23 20:27:51.286721 | orchestrator |  "vg_name": "ceph-21252442-555c-5549-b537-6075952af6e0" 2026-02-23 20:27:51.286730 | orchestrator |  } 2026-02-23 20:27:51.286739 | orchestrator |  ] 2026-02-23 20:27:51.286748 | orchestrator |  } 2026-02-23 20:27:51.286757 | orchestrator | } 2026-02-23 20:27:51.286766 | orchestrator | 2026-02-23 20:27:51.286775 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-23 20:27:51.286784 | orchestrator | 2026-02-23 20:27:51.286793 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-23 20:27:51.286802 | orchestrator | Monday 23 February 2026 20:27:46 +0000 (0:00:00.408) 0:00:47.317 ******* 2026-02-23 20:27:51.286812 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-23 20:27:51.286821 | orchestrator | 2026-02-23 20:27:51.286830 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-23 20:27:51.286838 | orchestrator | Monday 23 February 2026 20:27:46 +0000 (0:00:00.213) 0:00:47.531 ******* 2026-02-23 20:27:51.286847 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:27:51.286856 | orchestrator | 2026-02-23 20:27:51.286865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:51.286874 | orchestrator | Monday 23 February 2026 20:27:46 +0000 (0:00:00.223) 0:00:47.754 ******* 2026-02-23 20:27:51.286883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-23 20:27:51.286892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-23 20:27:51.286902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-23 20:27:51.286910 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-23 20:27:51.286926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-23 20:27:51.286934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-23 20:27:51.286943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-23 20:27:51.286952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-23 20:27:51.286961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-23 20:27:51.286974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-23 20:27:51.286983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-23 20:27:51.286992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-23 20:27:51.287000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-23 20:27:51.287009 | orchestrator | 2026-02-23 20:27:51.287018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:51.287026 | orchestrator | Monday 23 February 2026 20:27:47 +0000 (0:00:00.390) 0:00:48.145 ******* 2026-02-23 20:27:51.287034 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:27:51.287042 | orchestrator | 2026-02-23 20:27:51.287050 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-23 20:27:51.287058 | orchestrator | Monday 23 February 2026 20:27:47 +0000 (0:00:00.186) 0:00:48.332 ******* 2026-02-23 20:27:51.287066 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:27:51.287074 | orchestrator | 2026-02-23 
20:27:51.287082 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287106 | orchestrator | Monday 23 February 2026 20:27:47 +0000 (0:00:00.188) 0:00:48.520 *******
2026-02-23 20:27:51.287115 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:51.287123 | orchestrator |
2026-02-23 20:27:51.287131 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287138 | orchestrator | Monday 23 February 2026 20:27:47 +0000 (0:00:00.184) 0:00:48.705 *******
2026-02-23 20:27:51.287146 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:51.287154 | orchestrator |
2026-02-23 20:27:51.287162 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287170 | orchestrator | Monday 23 February 2026 20:27:47 +0000 (0:00:00.176) 0:00:48.882 *******
2026-02-23 20:27:51.287178 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:51.287186 | orchestrator |
2026-02-23 20:27:51.287193 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287201 | orchestrator | Monday 23 February 2026 20:27:48 +0000 (0:00:00.511) 0:00:49.394 *******
2026-02-23 20:27:51.287209 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:51.287217 | orchestrator |
2026-02-23 20:27:51.287225 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287233 | orchestrator | Monday 23 February 2026 20:27:48 +0000 (0:00:00.192) 0:00:49.586 *******
2026-02-23 20:27:51.287241 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:51.287249 | orchestrator |
2026-02-23 20:27:51.287257 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287265 | orchestrator | Monday 23 February 2026 20:27:48 +0000 (0:00:00.227) 0:00:49.813 *******
2026-02-23 20:27:51.287273 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:51.287281 | orchestrator |
2026-02-23 20:27:51.287289 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287296 | orchestrator | Monday 23 February 2026 20:27:48 +0000 (0:00:00.193) 0:00:50.007 *******
2026-02-23 20:27:51.287305 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a)
2026-02-23 20:27:51.287318 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a)
2026-02-23 20:27:51.287330 | orchestrator |
2026-02-23 20:27:51.287338 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287346 | orchestrator | Monday 23 February 2026 20:27:49 +0000 (0:00:00.442) 0:00:50.449 *******
2026-02-23 20:27:51.287354 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163)
2026-02-23 20:27:51.287362 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163)
2026-02-23 20:27:51.287370 | orchestrator |
2026-02-23 20:27:51.287378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287385 | orchestrator | Monday 23 February 2026 20:27:49 +0000 (0:00:00.412) 0:00:50.862 *******
2026-02-23 20:27:51.287393 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33)
2026-02-23 20:27:51.287402 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33)
2026-02-23 20:27:51.287409 | orchestrator |
2026-02-23 20:27:51.287417 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287425 | orchestrator | Monday 23 February 2026 20:27:50 +0000 (0:00:00.395) 0:00:51.257 *******
2026-02-23 20:27:51.287433 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0)
2026-02-23 20:27:51.287441 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0)
2026-02-23 20:27:51.287449 | orchestrator |
2026-02-23 20:27:51.287457 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-23 20:27:51.287465 | orchestrator | Monday 23 February 2026 20:27:50 +0000 (0:00:00.424) 0:00:51.682 *******
2026-02-23 20:27:51.287473 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-23 20:27:51.287481 | orchestrator |
2026-02-23 20:27:51.287488 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:51.287496 | orchestrator | Monday 23 February 2026 20:27:50 +0000 (0:00:00.320) 0:00:52.002 *******
2026-02-23 20:27:51.287504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-23 20:27:51.287512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-23 20:27:51.287520 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-23 20:27:51.287543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-23 20:27:51.287551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-23 20:27:51.287559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-23 20:27:51.287566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-23 20:27:51.287574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-23 20:27:51.287582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-23 20:27:51.287590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-23 20:27:51.287598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-23 20:27:51.287610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-23 20:27:59.611118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-23 20:27:59.611245 | orchestrator |
2026-02-23 20:27:59.611264 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611277 | orchestrator | Monday 23 February 2026 20:27:51 +0000 (0:00:00.401) 0:00:52.403 *******
2026-02-23 20:27:59.611314 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611327 | orchestrator |
2026-02-23 20:27:59.611338 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611349 | orchestrator | Monday 23 February 2026 20:27:51 +0000 (0:00:00.197) 0:00:52.600 *******
2026-02-23 20:27:59.611373 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611384 | orchestrator |
2026-02-23 20:27:59.611395 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611406 | orchestrator | Monday 23 February 2026 20:27:52 +0000 (0:00:00.530) 0:00:53.131 *******
2026-02-23 20:27:59.611417 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611427 | orchestrator |
2026-02-23 20:27:59.611438 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611449 | orchestrator | Monday 23 February 2026 20:27:52 +0000 (0:00:00.198) 0:00:53.330 *******
2026-02-23 20:27:59.611460 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611471 | orchestrator |
2026-02-23 20:27:59.611482 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611492 | orchestrator | Monday 23 February 2026 20:27:52 +0000 (0:00:00.200) 0:00:53.530 *******
2026-02-23 20:27:59.611503 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611514 | orchestrator |
2026-02-23 20:27:59.611525 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611566 | orchestrator | Monday 23 February 2026 20:27:52 +0000 (0:00:00.203) 0:00:53.733 *******
2026-02-23 20:27:59.611577 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611587 | orchestrator |
2026-02-23 20:27:59.611613 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611627 | orchestrator | Monday 23 February 2026 20:27:52 +0000 (0:00:00.215) 0:00:53.949 *******
2026-02-23 20:27:59.611639 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611663 | orchestrator |
2026-02-23 20:27:59.611675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611688 | orchestrator | Monday 23 February 2026 20:27:53 +0000 (0:00:00.182) 0:00:54.132 *******
2026-02-23 20:27:59.611700 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611712 | orchestrator |
2026-02-23 20:27:59.611723 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611734 | orchestrator | Monday 23 February 2026 20:27:53 +0000 (0:00:00.175) 0:00:54.308 *******
2026-02-23 20:27:59.611745 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-23 20:27:59.611757 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-23 20:27:59.611768 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-23 20:27:59.611779 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-23 20:27:59.611789 | orchestrator |
2026-02-23 20:27:59.611801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611812 | orchestrator | Monday 23 February 2026 20:27:53 +0000 (0:00:00.578) 0:00:54.887 *******
2026-02-23 20:27:59.611822 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611833 | orchestrator |
2026-02-23 20:27:59.611844 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611854 | orchestrator | Monday 23 February 2026 20:27:54 +0000 (0:00:00.189) 0:00:55.076 *******
2026-02-23 20:27:59.611865 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611876 | orchestrator |
2026-02-23 20:27:59.611886 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611897 | orchestrator | Monday 23 February 2026 20:27:54 +0000 (0:00:00.184) 0:00:55.260 *******
2026-02-23 20:27:59.611907 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611918 | orchestrator |
2026-02-23 20:27:59.611929 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-23 20:27:59.611940 | orchestrator | Monday 23 February 2026 20:27:54 +0000 (0:00:00.174) 0:00:55.434 *******
2026-02-23 20:27:59.611962 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.611972 | orchestrator |
2026-02-23 20:27:59.611983 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-23 20:27:59.611994 | orchestrator | Monday 23 February 2026 20:27:54 +0000 (0:00:00.203) 0:00:55.638 *******
2026-02-23 20:27:59.612004 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.612015 | orchestrator |
2026-02-23 20:27:59.612025 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-23 20:27:59.612036 | orchestrator | Monday 23 February 2026 20:27:54 +0000 (0:00:00.271) 0:00:55.909 *******
2026-02-23 20:27:59.612047 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '086e8658-baeb-56a9-865d-4af6c70c9ca3'}})
2026-02-23 20:27:59.612058 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '721c0c76-436b-5140-8464-e8c748d186e3'}})
2026-02-23 20:27:59.612069 | orchestrator |
2026-02-23 20:27:59.612079 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-23 20:27:59.612090 | orchestrator | Monday 23 February 2026 20:27:55 +0000 (0:00:00.200) 0:00:56.110 *******
2026-02-23 20:27:59.612102 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:27:59.612125 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:27:59.612136 | orchestrator |
2026-02-23 20:27:59.612147 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-23 20:27:59.612175 | orchestrator | Monday 23 February 2026 20:27:56 +0000 (0:00:01.805) 0:00:57.915 *******
2026-02-23 20:27:59.612188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:27:59.612201 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:27:59.612212 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.612222 | orchestrator |
2026-02-23 20:27:59.612233 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-23 20:27:59.612244 | orchestrator | Monday 23 February 2026 20:27:57 +0000 (0:00:00.132) 0:00:58.047 *******
2026-02-23 20:27:59.612255 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:27:59.612266 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:27:59.612277 | orchestrator |
2026-02-23 20:27:59.612288 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-23 20:27:59.612298 | orchestrator | Monday 23 February 2026 20:27:58 +0000 (0:00:01.318) 0:00:59.366 *******
2026-02-23 20:27:59.612309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:27:59.612320 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:27:59.612331 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.612342 | orchestrator |
2026-02-23 20:27:59.612353 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-23 20:27:59.612363 | orchestrator | Monday 23 February 2026 20:27:58 +0000 (0:00:00.138) 0:00:59.508 *******
2026-02-23 20:27:59.612374 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.612385 | orchestrator |
2026-02-23 20:27:59.612395 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-23 20:27:59.612406 | orchestrator | Monday 23 February 2026 20:27:58 +0000 (0:00:00.138) 0:00:59.646 *******
2026-02-23 20:27:59.612424 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:27:59.612436 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:27:59.612447 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.612457 | orchestrator |
2026-02-23 20:27:59.612468 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-23 20:27:59.612479 | orchestrator | Monday 23 February 2026 20:27:58 +0000 (0:00:00.128) 0:00:59.775 *******
2026-02-23 20:27:59.612490 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.612500 | orchestrator |
2026-02-23 20:27:59.612511 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-23 20:27:59.612522 | orchestrator | Monday 23 February 2026 20:27:58 +0000 (0:00:00.128) 0:00:59.904 *******
2026-02-23 20:27:59.612560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:27:59.612579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:27:59.612598 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.612616 | orchestrator |
2026-02-23 20:27:59.612650 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-23 20:27:59.612682 | orchestrator | Monday 23 February 2026 20:27:58 +0000 (0:00:00.129) 0:01:00.034 *******
2026-02-23 20:27:59.612709 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.612726 | orchestrator |
2026-02-23 20:27:59.612743 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-23 20:27:59.612760 | orchestrator | Monday 23 February 2026 20:27:59 +0000 (0:00:00.132) 0:01:00.167 *******
2026-02-23 20:27:59.612777 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:27:59.612794 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:27:59.612812 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:27:59.612850 | orchestrator |
2026-02-23 20:27:59.612867 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-23 20:27:59.612886 | orchestrator | Monday 23 February 2026 20:27:59 +0000 (0:00:00.265) 0:01:00.314 *******
2026-02-23 20:27:59.612905 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:27:59.612924 | orchestrator |
2026-02-23 20:27:59.612942 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-23 20:27:59.612959 | orchestrator | Monday 23 February 2026 20:27:59 +0000 (0:00:00.156) 0:01:00.580 *******
2026-02-23 20:27:59.612983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:05.084146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:05.084258 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.084281 | orchestrator |
2026-02-23 20:28:05.084298 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-23 20:28:05.084317 | orchestrator | Monday 23 February 2026 20:27:59 +0000 (0:00:00.156) 0:01:00.737 *******
2026-02-23 20:28:05.084335 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:05.084350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:05.084389 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.084406 | orchestrator |
2026-02-23 20:28:05.084422 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-23 20:28:05.084438 | orchestrator | Monday 23 February 2026 20:27:59 +0000 (0:00:00.144) 0:01:00.882 *******
2026-02-23 20:28:05.084454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:05.084470 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:05.084485 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.084501 | orchestrator |
2026-02-23 20:28:05.084516 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-23 20:28:05.084579 | orchestrator | Monday 23 February 2026 20:27:59 +0000 (0:00:00.156) 0:01:01.038 *******
2026-02-23 20:28:05.084598 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.084614 | orchestrator |
2026-02-23 20:28:05.084631 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-23 20:28:05.084648 | orchestrator | Monday 23 February 2026 20:28:00 +0000 (0:00:00.125) 0:01:01.164 *******
2026-02-23 20:28:05.084664 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.084680 | orchestrator |
2026-02-23 20:28:05.084696 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-23 20:28:05.084712 | orchestrator | Monday 23 February 2026 20:28:00 +0000 (0:00:00.132) 0:01:01.297 *******
2026-02-23 20:28:05.084728 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.084744 | orchestrator |
2026-02-23 20:28:05.084760 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-23 20:28:05.084776 | orchestrator | Monday 23 February 2026 20:28:00 +0000 (0:00:00.124) 0:01:01.422 *******
2026-02-23 20:28:05.084793 | orchestrator | ok: [testbed-node-5] => {
2026-02-23 20:28:05.084810 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-23 20:28:05.084824 | orchestrator | }
2026-02-23 20:28:05.084838 | orchestrator |
2026-02-23 20:28:05.084852 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-23 20:28:05.084869 | orchestrator | Monday 23 February 2026 20:28:00 +0000 (0:00:00.121) 0:01:01.543 *******
2026-02-23 20:28:05.084885 | orchestrator | ok: [testbed-node-5] => {
2026-02-23 20:28:05.084899 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-23 20:28:05.084916 | orchestrator | }
2026-02-23 20:28:05.084932 | orchestrator |
2026-02-23 20:28:05.084948 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-23 20:28:05.084963 | orchestrator | Monday 23 February 2026 20:28:00 +0000 (0:00:00.128) 0:01:01.671 *******
2026-02-23 20:28:05.084979 | orchestrator | ok: [testbed-node-5] => {
2026-02-23 20:28:05.084994 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-23 20:28:05.085010 | orchestrator | }
2026-02-23 20:28:05.085025 | orchestrator |
2026-02-23 20:28:05.085041 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-23 20:28:05.085058 | orchestrator | Monday 23 February 2026 20:28:00 +0000 (0:00:00.119) 0:01:01.791 *******
2026-02-23 20:28:05.085075 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:28:05.085090 | orchestrator |
2026-02-23 20:28:05.085106 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-23 20:28:05.085122 | orchestrator | Monday 23 February 2026 20:28:01 +0000 (0:00:00.505) 0:01:02.297 *******
2026-02-23 20:28:05.085137 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:28:05.085153 | orchestrator |
2026-02-23 20:28:05.085168 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-23 20:28:05.085184 | orchestrator | Monday 23 February 2026 20:28:01 +0000 (0:00:00.500) 0:01:02.798 *******
2026-02-23 20:28:05.085198 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:28:05.085227 | orchestrator |
2026-02-23 20:28:05.085243 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-23 20:28:05.085259 | orchestrator | Monday 23 February 2026 20:28:02 +0000 (0:00:00.629) 0:01:03.427 *******
2026-02-23 20:28:05.085273 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:28:05.085288 | orchestrator |
2026-02-23 20:28:05.085305 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-23 20:28:05.085321 | orchestrator | Monday 23 February 2026 20:28:02 +0000 (0:00:00.133) 0:01:03.561 *******
2026-02-23 20:28:05.085336 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.085351 | orchestrator |
2026-02-23 20:28:05.085365 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-23 20:28:05.085379 | orchestrator | Monday 23 February 2026 20:28:02 +0000 (0:00:00.097) 0:01:03.659 *******
2026-02-23 20:28:05.085393 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.085407 | orchestrator |
2026-02-23 20:28:05.085422 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-23 20:28:05.085436 | orchestrator | Monday 23 February 2026 20:28:02 +0000 (0:00:00.114) 0:01:03.773 *******
2026-02-23 20:28:05.085451 | orchestrator | ok: [testbed-node-5] => {
2026-02-23 20:28:05.085467 | orchestrator |  "vgs_report": {
2026-02-23 20:28:05.085482 | orchestrator |  "vg": []
2026-02-23 20:28:05.085519 | orchestrator |  }
2026-02-23 20:28:05.085558 | orchestrator | }
2026-02-23 20:28:05.085573 | orchestrator |
2026-02-23 20:28:05.085587 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-23 20:28:05.085603 | orchestrator | Monday 23 February 2026 20:28:02 +0000 (0:00:00.133) 0:01:03.906 *******
2026-02-23 20:28:05.085617 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.085632 | orchestrator |
2026-02-23 20:28:05.085647 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-23 20:28:05.085662 | orchestrator | Monday 23 February 2026 20:28:02 +0000 (0:00:00.118) 0:01:04.025 *******
2026-02-23 20:28:05.085676 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.085689 | orchestrator |
2026-02-23 20:28:05.085698 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-23 20:28:05.085709 | orchestrator | Monday 23 February 2026 20:28:03 +0000 (0:00:00.119) 0:01:04.144 *******
2026-02-23 20:28:05.085724 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.085738 | orchestrator |
2026-02-23 20:28:05.085752 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-23 20:28:05.085767 | orchestrator | Monday 23 February 2026 20:28:03 +0000 (0:00:00.141) 0:01:04.285 *******
2026-02-23 20:28:05.085781 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.085796 | orchestrator |
2026-02-23 20:28:05.085810 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-23 20:28:05.085825 | orchestrator | Monday 23 February 2026 20:28:03 +0000 (0:00:00.125) 0:01:04.410 *******
2026-02-23 20:28:05.085840 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.085854 | orchestrator |
2026-02-23 20:28:05.085870 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-23 20:28:05.085884 | orchestrator | Monday 23 February 2026 20:28:03 +0000 (0:00:00.114) 0:01:04.525 *******
2026-02-23 20:28:05.085898 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.085913 | orchestrator |
2026-02-23 20:28:05.085928 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-23 20:28:05.085952 | orchestrator | Monday 23 February 2026 20:28:03 +0000 (0:00:00.123) 0:01:04.649 *******
2026-02-23 20:28:05.085966 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.085978 | orchestrator |
2026-02-23 20:28:05.085992 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-23 20:28:05.086005 | orchestrator | Monday 23 February 2026 20:28:03 +0000 (0:00:00.124) 0:01:04.774 *******
2026-02-23 20:28:05.086077 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.086091 | orchestrator |
2026-02-23 20:28:05.086104 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-23 20:28:05.086152 | orchestrator | Monday 23 February 2026 20:28:03 +0000 (0:00:00.267) 0:01:05.042 *******
2026-02-23 20:28:05.086166 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.086179 | orchestrator |
2026-02-23 20:28:05.086191 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-23 20:28:05.086204 | orchestrator | Monday 23 February 2026 20:28:04 +0000 (0:00:00.135) 0:01:05.177 *******
2026-02-23 20:28:05.086218 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.086231 | orchestrator |
2026-02-23 20:28:05.086244 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-23 20:28:05.086255 | orchestrator | Monday 23 February 2026 20:28:04 +0000 (0:00:00.123) 0:01:05.300 *******
2026-02-23 20:28:05.086262 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.086270 | orchestrator |
2026-02-23 20:28:05.086278 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-23 20:28:05.086286 | orchestrator | Monday 23 February 2026 20:28:04 +0000 (0:00:00.122) 0:01:05.423 *******
2026-02-23 20:28:05.086294 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.086301 | orchestrator |
2026-02-23 20:28:05.086309 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-23 20:28:05.086317 | orchestrator | Monday 23 February 2026 20:28:04 +0000 (0:00:00.140) 0:01:05.564 *******
2026-02-23 20:28:05.086325 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.086332 | orchestrator |
2026-02-23 20:28:05.086340 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-23 20:28:05.086348 | orchestrator | Monday 23 February 2026 20:28:04 +0000 (0:00:00.126) 0:01:05.690 *******
2026-02-23 20:28:05.086357 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.086371 | orchestrator |
2026-02-23 20:28:05.086384 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-23 20:28:05.086396 | orchestrator | Monday 23 February 2026 20:28:04 +0000 (0:00:00.126) 0:01:05.816 *******
2026-02-23 20:28:05.086409 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:05.086424 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:05.086438 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.086452 | orchestrator |
2026-02-23 20:28:05.086466 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-23 20:28:05.086480 | orchestrator | Monday 23 February 2026 20:28:04 +0000 (0:00:00.137) 0:01:05.953 *******
2026-02-23 20:28:05.086494 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:05.086508 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:05.086522 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:05.086562 | orchestrator |
2026-02-23 20:28:05.086576 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-23 20:28:05.086591 | orchestrator | Monday 23 February 2026 20:28:05 +0000 (0:00:00.121) 0:01:06.075 *******
2026-02-23 20:28:05.086617 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:07.965612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:07.965725 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:07.965750 | orchestrator |
2026-02-23 20:28:07.965768 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-23 20:28:07.965789 | orchestrator | Monday 23 February 2026 20:28:05 +0000 (0:00:00.130) 0:01:06.206 *******
2026-02-23 20:28:07.965841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:07.965862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:07.965881 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:07.965900 | orchestrator |
2026-02-23 20:28:07.965920 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-23 20:28:07.965939 | orchestrator | Monday 23 February 2026 20:28:05 +0000 (0:00:00.125) 0:01:06.331 *******
2026-02-23 20:28:07.965958 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:07.965989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:07.966000 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:07.966011 | orchestrator |
2026-02-23 20:28:07.966113 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-23 20:28:07.966135 | orchestrator | Monday 23 February 2026 20:28:05 +0000 (0:00:00.136) 0:01:06.468 *******
2026-02-23 20:28:07.966157 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:07.966175 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:07.966194 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:07.966207 | orchestrator |
2026-02-23 20:28:07.966220 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-23 20:28:07.966232 | orchestrator | Monday 23 February 2026 20:28:05 +0000 (0:00:00.260) 0:01:06.728 *******
2026-02-23 20:28:07.966245 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:07.966258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:07.966271 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:07.966283 | orchestrator |
2026-02-23 20:28:07.966296 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-23 20:28:07.966308 | orchestrator | Monday 23 February 2026 20:28:05 +0000 (0:00:00.152) 0:01:06.880 *******
2026-02-23 20:28:07.966321 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:07.966333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:07.966345 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:07.966357 | orchestrator |
2026-02-23 20:28:07.966370 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-23 20:28:07.966382 | orchestrator | Monday 23 February 2026 20:28:05 +0000 (0:00:00.145) 0:01:07.026 *******
2026-02-23 20:28:07.966394 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:28:07.966409 | orchestrator |
2026-02-23 20:28:07.966422 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-23 20:28:07.966435 | orchestrator | Monday 23 February 2026 20:28:06 +0000 (0:00:00.578) 0:01:07.604 *******
2026-02-23 20:28:07.966447 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:28:07.966458 | orchestrator |
2026-02-23 20:28:07.966469 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-23 20:28:07.966493 | orchestrator | Monday 23 February 2026 20:28:07 +0000 (0:00:00.505) 0:01:08.109 *******
2026-02-23 20:28:07.966504 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:28:07.966515 | orchestrator |
2026-02-23 20:28:07.966526 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-23 20:28:07.966594 | orchestrator | Monday 23 February 2026 20:28:07 +0000 (0:00:00.135) 0:01:08.245 *******
2026-02-23 20:28:07.966607 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'vg_name': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:07.966619 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'vg_name': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:07.966630 | orchestrator |
2026-02-23 20:28:07.966641 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-23 20:28:07.966652 | orchestrator | Monday 23 February 2026 20:28:07 +0000 (0:00:00.148) 0:01:08.393 *******
2026-02-23 20:28:07.966684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:07.966696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:07.966707 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:07.966718 | orchestrator |
2026-02-23 20:28:07.966729 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-23 20:28:07.966740 | orchestrator | Monday 23 February 2026 20:28:07 +0000 (0:00:00.159) 0:01:08.552 *******
2026-02-23 20:28:07.966751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:07.966762 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:07.966773 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:07.966784 | orchestrator |
2026-02-23 20:28:07.966794 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-23 20:28:07.966805 | orchestrator | Monday 23 February 2026 20:28:07 +0000 (0:00:00.147) 0:01:08.700 *******
2026-02-23 20:28:07.966816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'})
2026-02-23 20:28:07.966827 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'})
2026-02-23 20:28:07.966838 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:28:07.966848 | orchestrator |
2026-02-23 20:28:07.966860 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-23 20:28:07.966870 | orchestrator | Monday 23 February 2026 20:28:07 +0000 (0:00:00.159) 0:01:08.860 *******
2026-02-23 20:28:07.966882 |
orchestrator | ok: [testbed-node-5] => { 2026-02-23 20:28:07.966893 | orchestrator |  "lvm_report": { 2026-02-23 20:28:07.966905 | orchestrator |  "lv": [ 2026-02-23 20:28:07.966916 | orchestrator |  { 2026-02-23 20:28:07.966927 | orchestrator |  "lv_name": "osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3", 2026-02-23 20:28:07.966938 | orchestrator |  "vg_name": "ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3" 2026-02-23 20:28:07.966949 | orchestrator |  }, 2026-02-23 20:28:07.966960 | orchestrator |  { 2026-02-23 20:28:07.966971 | orchestrator |  "lv_name": "osd-block-721c0c76-436b-5140-8464-e8c748d186e3", 2026-02-23 20:28:07.966982 | orchestrator |  "vg_name": "ceph-721c0c76-436b-5140-8464-e8c748d186e3" 2026-02-23 20:28:07.966993 | orchestrator |  } 2026-02-23 20:28:07.967004 | orchestrator |  ], 2026-02-23 20:28:07.967015 | orchestrator |  "pv": [ 2026-02-23 20:28:07.967043 | orchestrator |  { 2026-02-23 20:28:07.967054 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-23 20:28:07.967065 | orchestrator |  "vg_name": "ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3" 2026-02-23 20:28:07.967076 | orchestrator |  }, 2026-02-23 20:28:07.967087 | orchestrator |  { 2026-02-23 20:28:07.967098 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-23 20:28:07.967109 | orchestrator |  "vg_name": "ceph-721c0c76-436b-5140-8464-e8c748d186e3" 2026-02-23 20:28:07.967120 | orchestrator |  } 2026-02-23 20:28:07.967131 | orchestrator |  ] 2026-02-23 20:28:07.967142 | orchestrator |  } 2026-02-23 20:28:07.967153 | orchestrator | } 2026-02-23 20:28:07.967164 | orchestrator | 2026-02-23 20:28:07.967175 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:28:07.967186 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-23 20:28:07.967197 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-23 20:28:07.967208 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-23 20:28:07.967219 | orchestrator | 2026-02-23 20:28:07.967230 | orchestrator | 2026-02-23 20:28:07.967241 | orchestrator | 2026-02-23 20:28:07.967251 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:28:07.967262 | orchestrator | Monday 23 February 2026 20:28:07 +0000 (0:00:00.137) 0:01:08.997 ******* 2026-02-23 20:28:07.967273 | orchestrator | =============================================================================== 2026-02-23 20:28:07.967284 | orchestrator | Create block VGs -------------------------------------------------------- 5.71s 2026-02-23 20:28:07.967295 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s 2026-02-23 20:28:07.967306 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.72s 2026-02-23 20:28:07.967317 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.69s 2026-02-23 20:28:07.967338 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.61s 2026-02-23 20:28:07.967349 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2026-02-23 20:28:07.967360 | orchestrator | Add known partitions to the list of available block devices ------------- 1.55s 2026-02-23 20:28:07.967451 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s 2026-02-23 20:28:07.967474 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s 2026-02-23 20:28:08.249848 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2026-02-23 20:28:08.249961 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s 2026-02-23 20:28:08.249976 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.84s 2026-02-23 20:28:08.249988 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.69s 2026-02-23 20:28:08.249999 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2026-02-23 20:28:08.250009 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.68s 2026-02-23 20:28:08.250083 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-02-23 20:28:08.250096 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-02-23 20:28:08.250107 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.65s 2026-02-23 20:28:08.250118 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.62s 2026-02-23 20:28:08.250130 | orchestrator | Calculate size needed for LVs on ceph_wal_devices ----------------------- 0.61s 2026-02-23 20:28:20.777460 | orchestrator | 2026-02-23 20:28:20 | INFO  | Prepare task for execution of facts. 2026-02-23 20:28:20.847144 | orchestrator | 2026-02-23 20:28:20 | INFO  | Task 8bf701ce-41bf-4bec-b0ea-cabc559b9218 (facts) was prepared for execution. 2026-02-23 20:28:20.847291 | orchestrator | 2026-02-23 20:28:20 | INFO  | It takes a moment until task 8bf701ce-41bf-4bec-b0ea-cabc559b9218 (facts) has been started and output is visible here. 
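The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" step above merges the JSON reports of `lvs` and `pvs` into the single `lvm_report` structure printed at the end of the play. A minimal sketch of that merge, assuming report JSON shaped like the output of `lvs --reportformat json -o lv_name,vg_name` and `pvs --reportformat json -o pv_name,vg_name`; the sample values mirror the testbed-node-5 entries in the log, but `combine_reports` itself is a hypothetical helper, not the role's actual code:

```python
import json

# Sample command output shaped like `lvs`/`pvs --reportformat json`;
# values mirror the testbed-node-5 entries shown in the log above.
lvs_out = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3",
     "vg_name": "ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3"},
    {"lv_name": "osd-block-721c0c76-436b-5140-8464-e8c748d186e3",
     "vg_name": "ceph-721c0c76-436b-5140-8464-e8c748d186e3"},
]}]})
pvs_out = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3"},
    {"pv_name": "/dev/sdc", "vg_name": "ceph-721c0c76-436b-5140-8464-e8c748d186e3"},
]}]})

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lv and pv sections of the two reports (hypothetical helper)."""
    return {
        "lv": json.loads(lvs_json)["report"][0]["lv"],
        "pv": json.loads(pvs_json)["report"][0]["pv"],
    }

lvm_report = combine_reports(lvs_out, pvs_out)
# "Create list of VG/LV names" then derives vg/lv pairs from the lv section.
vg_lv_names = ["{}/{}".format(lv["vg_name"], lv["lv_name"]) for lv in lvm_report["lv"]]
```

The resulting dictionary matches the `lvm_report` value shown by the "Print LVM report data" task.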
2026-02-23 20:28:32.743042 | orchestrator | 2026-02-23 20:28:32.743177 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-23 20:28:32.743199 | orchestrator | 2026-02-23 20:28:32.743212 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-23 20:28:32.743224 | orchestrator | Monday 23 February 2026 20:28:24 +0000 (0:00:00.255) 0:00:00.255 ******* 2026-02-23 20:28:32.743237 | orchestrator | ok: [testbed-manager] 2026-02-23 20:28:32.743248 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:28:32.743256 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:28:32.743263 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:28:32.743271 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:28:32.743278 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:28:32.743285 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:28:32.743293 | orchestrator | 2026-02-23 20:28:32.743300 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-23 20:28:32.743308 | orchestrator | Monday 23 February 2026 20:28:25 +0000 (0:00:01.008) 0:00:01.264 ******* 2026-02-23 20:28:32.743315 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:28:32.743325 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:28:32.743338 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:28:32.743349 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:28:32.743361 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:28:32.743372 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:28:32.743384 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:28:32.743396 | orchestrator | 2026-02-23 20:28:32.743408 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-23 20:28:32.743421 | orchestrator | 2026-02-23 20:28:32.743434 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-23 20:28:32.743447 | orchestrator | Monday 23 February 2026 20:28:27 +0000 (0:00:01.093) 0:00:02.357 ******* 2026-02-23 20:28:32.743459 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:28:32.743471 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:28:32.743483 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:28:32.743495 | orchestrator | ok: [testbed-manager] 2026-02-23 20:28:32.743507 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:28:32.743609 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:28:32.743620 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:28:32.743628 | orchestrator | 2026-02-23 20:28:32.743636 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-23 20:28:32.743644 | orchestrator | 2026-02-23 20:28:32.743652 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-23 20:28:32.743661 | orchestrator | Monday 23 February 2026 20:28:31 +0000 (0:00:04.851) 0:00:07.209 ******* 2026-02-23 20:28:32.743670 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:28:32.743678 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:28:32.743686 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:28:32.743695 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:28:32.743702 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:28:32.743711 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:28:32.743723 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:28:32.743734 | orchestrator | 2026-02-23 20:28:32.743745 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:28:32.743758 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:28:32.743772 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-23 20:28:32.743830 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:28:32.743844 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:28:32.743857 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:28:32.743870 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:28:32.743882 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:28:32.743894 | orchestrator | 2026-02-23 20:28:32.743906 | orchestrator | 2026-02-23 20:28:32.743918 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:28:32.743930 | orchestrator | Monday 23 February 2026 20:28:32 +0000 (0:00:00.521) 0:00:07.730 ******* 2026-02-23 20:28:32.743943 | orchestrator | =============================================================================== 2026-02-23 20:28:32.743955 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s 2026-02-23 20:28:32.743968 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.09s 2026-02-23 20:28:32.743979 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s 2026-02-23 20:28:32.743989 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-02-23 20:28:45.181207 | orchestrator | 2026-02-23 20:28:45 | INFO  | Prepare task for execution of frr. 2026-02-23 20:28:45.261561 | orchestrator | 2026-02-23 20:28:45 | INFO  | Task 0ec0b314-7bd4-4823-a6a5-24179ed2ba41 (frr) was prepared for execution. 
2026-02-23 20:28:45.261678 | orchestrator | 2026-02-23 20:28:45 | INFO  | It takes a moment until task 0ec0b314-7bd4-4823-a6a5-24179ed2ba41 (frr) has been started and output is visible here. 2026-02-23 20:29:10.718840 | orchestrator | 2026-02-23 20:29:10.718956 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-23 20:29:10.718968 | orchestrator | 2026-02-23 20:29:10.718975 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-23 20:29:10.718983 | orchestrator | Monday 23 February 2026 20:28:49 +0000 (0:00:00.232) 0:00:00.232 ******* 2026-02-23 20:29:10.718990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-23 20:29:10.718999 | orchestrator | 2026-02-23 20:29:10.719006 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-23 20:29:10.719013 | orchestrator | Monday 23 February 2026 20:28:49 +0000 (0:00:00.214) 0:00:00.447 ******* 2026-02-23 20:29:10.719020 | orchestrator | changed: [testbed-manager] 2026-02-23 20:29:10.719028 | orchestrator | 2026-02-23 20:29:10.719035 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-23 20:29:10.719042 | orchestrator | Monday 23 February 2026 20:28:51 +0000 (0:00:01.212) 0:00:01.659 ******* 2026-02-23 20:29:10.719048 | orchestrator | changed: [testbed-manager] 2026-02-23 20:29:10.719055 | orchestrator | 2026-02-23 20:29:10.719062 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-23 20:29:10.719068 | orchestrator | Monday 23 February 2026 20:29:00 +0000 (0:00:09.673) 0:00:11.333 ******* 2026-02-23 20:29:10.719075 | orchestrator | ok: [testbed-manager] 2026-02-23 20:29:10.719083 | orchestrator | 2026-02-23 20:29:10.719090 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-23 20:29:10.719097 | orchestrator | Monday 23 February 2026 20:29:01 +0000 (0:00:01.054) 0:00:12.388 ******* 2026-02-23 20:29:10.719104 | orchestrator | changed: [testbed-manager] 2026-02-23 20:29:10.719127 | orchestrator | 2026-02-23 20:29:10.719134 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-23 20:29:10.719141 | orchestrator | Monday 23 February 2026 20:29:02 +0000 (0:00:00.959) 0:00:13.347 ******* 2026-02-23 20:29:10.719149 | orchestrator | ok: [testbed-manager] 2026-02-23 20:29:10.719156 | orchestrator | 2026-02-23 20:29:10.719162 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-02-23 20:29:10.719169 | orchestrator | Monday 23 February 2026 20:29:03 +0000 (0:00:01.208) 0:00:14.555 ******* 2026-02-23 20:29:10.719176 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:29:10.719182 | orchestrator | 2026-02-23 20:29:10.719189 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-02-23 20:29:10.719196 | orchestrator | Monday 23 February 2026 20:29:04 +0000 (0:00:00.152) 0:00:14.708 ******* 2026-02-23 20:29:10.719203 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:29:10.719209 | orchestrator | 2026-02-23 20:29:10.719216 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-02-23 20:29:10.719222 | orchestrator | Monday 23 February 2026 20:29:04 +0000 (0:00:00.152) 0:00:14.860 ******* 2026-02-23 20:29:10.719229 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:29:10.719236 | orchestrator | 2026-02-23 20:29:10.719242 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-23 20:29:10.719249 | orchestrator | Monday 23 February 2026 20:29:04 +0000 (0:00:00.162) 0:00:15.022 ******* 2026-02-23 
20:29:10.719256 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:29:10.719262 | orchestrator | 2026-02-23 20:29:10.719269 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-23 20:29:10.719276 | orchestrator | Monday 23 February 2026 20:29:04 +0000 (0:00:00.147) 0:00:15.170 ******* 2026-02-23 20:29:10.719282 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:29:10.719289 | orchestrator | 2026-02-23 20:29:10.719295 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-23 20:29:10.719302 | orchestrator | Monday 23 February 2026 20:29:04 +0000 (0:00:00.160) 0:00:15.330 ******* 2026-02-23 20:29:10.719309 | orchestrator | changed: [testbed-manager] 2026-02-23 20:29:10.719315 | orchestrator | 2026-02-23 20:29:10.719322 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-23 20:29:10.719329 | orchestrator | Monday 23 February 2026 20:29:05 +0000 (0:00:01.193) 0:00:16.524 ******* 2026-02-23 20:29:10.719334 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-23 20:29:10.719340 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-23 20:29:10.719347 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-23 20:29:10.719353 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-23 20:29:10.719359 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-23 20:29:10.719365 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-23 20:29:10.719371 | orchestrator | 2026-02-23 20:29:10.719377 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-02-23 20:29:10.719388 | orchestrator | Monday 23 February 2026 20:29:07 +0000 (0:00:02.028) 0:00:18.552 ******* 2026-02-23 20:29:10.719396 | orchestrator | ok: [testbed-manager] 2026-02-23 20:29:10.719404 | orchestrator | 2026-02-23 20:29:10.719412 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-23 20:29:10.719421 | orchestrator | Monday 23 February 2026 20:29:09 +0000 (0:00:01.103) 0:00:19.656 ******* 2026-02-23 20:29:10.719430 | orchestrator | changed: [testbed-manager] 2026-02-23 20:29:10.719440 | orchestrator | 2026-02-23 20:29:10.719450 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:29:10.719467 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-23 20:29:10.719475 | orchestrator | 2026-02-23 20:29:10.719483 | orchestrator | 2026-02-23 20:29:10.719511 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:29:10.719521 | orchestrator | Monday 23 February 2026 20:29:10 +0000 (0:00:01.335) 0:00:20.991 ******* 2026-02-23 20:29:10.719531 | orchestrator | =============================================================================== 2026-02-23 20:29:10.719569 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.67s 2026-02-23 20:29:10.719578 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.03s 2026-02-23 20:29:10.719587 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.34s 2026-02-23 20:29:10.719593 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.21s 2026-02-23 20:29:10.719601 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.21s 
2026-02-23 20:29:10.719612 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.19s 2026-02-23 20:29:10.719622 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.10s 2026-02-23 20:29:10.719630 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.05s 2026-02-23 20:29:10.719641 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.96s 2026-02-23 20:29:10.719651 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s 2026-02-23 20:29:10.719662 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s 2026-02-23 20:29:10.719670 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-02-23 20:29:10.719679 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.15s 2026-02-23 20:29:10.719687 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-02-23 20:29:10.719696 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-02-23 20:29:11.037170 | orchestrator | 2026-02-23 20:29:11.038888 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Feb 23 20:29:11 UTC 2026 2026-02-23 20:29:11.038964 | orchestrator | 2026-02-23 20:29:13.028408 | orchestrator | 2026-02-23 20:29:13 | INFO  | Collection nutshell is prepared for execution 2026-02-23 20:29:13.028466 | orchestrator | 2026-02-23 20:29:13 | INFO  | A [0] - dotfiles 2026-02-23 20:29:23.151067 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [0] - homer 2026-02-23 20:29:23.151144 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [0] - netdata 2026-02-23 20:29:23.151151 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [0] - openstackclient 2026-02-23 20:29:23.151158 | orchestrator | 2026-02-23 
20:29:23 | INFO  | A [0] - phpmyadmin 2026-02-23 20:29:23.151163 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [0] - common 2026-02-23 20:29:23.155280 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [1] -- loadbalancer 2026-02-23 20:29:23.155386 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [2] --- opensearch 2026-02-23 20:29:23.155407 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [2] --- mariadb-ng 2026-02-23 20:29:23.155874 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [3] ---- horizon 2026-02-23 20:29:23.155889 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [3] ---- keystone 2026-02-23 20:29:23.156089 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [4] ----- neutron 2026-02-23 20:29:23.156262 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [5] ------ wait-for-nova 2026-02-23 20:29:23.156339 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [6] ------- octavia 2026-02-23 20:29:23.158363 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [4] ----- barbican 2026-02-23 20:29:23.158441 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [4] ----- designate 2026-02-23 20:29:23.158884 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [4] ----- ironic 2026-02-23 20:29:23.158920 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [4] ----- placement 2026-02-23 20:29:23.158926 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [4] ----- magnum 2026-02-23 20:29:23.159516 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [1] -- openvswitch 2026-02-23 20:29:23.159832 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [2] --- ovn 2026-02-23 20:29:23.160112 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [1] -- memcached 2026-02-23 20:29:23.160351 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [1] -- redis 2026-02-23 20:29:23.160359 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [1] -- rabbitmq-ng 2026-02-23 20:29:23.160696 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [0] - kubernetes 2026-02-23 20:29:23.163344 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [1] -- 
kubeconfig 2026-02-23 20:29:23.163366 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [1] -- copy-kubeconfig 2026-02-23 20:29:23.163491 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [0] - ceph 2026-02-23 20:29:23.165742 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [1] -- ceph-pools 2026-02-23 20:29:23.166110 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [2] --- copy-ceph-keys 2026-02-23 20:29:23.166158 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [3] ---- cephclient 2026-02-23 20:29:23.166167 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-02-23 20:29:23.166449 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [4] ----- wait-for-keystone 2026-02-23 20:29:23.166467 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [5] ------ kolla-ceph-rgw 2026-02-23 20:29:23.166475 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [5] ------ glance 2026-02-23 20:29:23.166589 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [5] ------ cinder 2026-02-23 20:29:23.166849 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [5] ------ nova 2026-02-23 20:29:23.167119 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [4] ----- prometheus 2026-02-23 20:29:23.167140 | orchestrator | 2026-02-23 20:29:23 | INFO  | A [5] ------ grafana 2026-02-23 20:29:23.354818 | orchestrator | 2026-02-23 20:29:23 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-02-23 20:29:23.354908 | orchestrator | 2026-02-23 20:29:23 | INFO  | Tasks are running in the background 2026-02-23 20:29:26.566442 | orchestrator | 2026-02-23 20:29:26 | INFO  | No task IDs specified, wait for all currently running tasks 2026-02-23 20:29:28.679952 | orchestrator | 2026-02-23 20:29:28 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:29:28.681191 | orchestrator | 2026-02-23 20:29:28 | INFO  | Task b84f0933-4922-4ac9-8835-1e38e1094320 is in state STARTED 2026-02-23 20:29:28.681238 | orchestrator | 2026-02-23 20:29:28 | INFO 
| Task 5e44d9b2-df87-4d93-91d9-c3c0c6df0296 is in state STARTED
2026-02-23 20:29:28.681258 | orchestrator | 2026-02-23 20:29:28 | INFO  | Task 3b475625-1ca8-4260-94d6-850033f04d70 is in state STARTED
2026-02-23 20:29:28.681699 | orchestrator | 2026-02-23 20:29:28 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:29:28.682296 | orchestrator | 2026-02-23 20:29:28 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:29:28.683668 | orchestrator | 2026-02-23 20:29:28 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED
2026-02-23 20:29:28.683818 | orchestrator | 2026-02-23 20:29:28 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:29:50.449196 | orchestrator |
2026-02-23 20:29:50.449307 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-02-23 20:29:50.449334 | orchestrator |
2026-02-23 20:29:50.449353 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
**** 2026-02-23 20:29:50.449373 | orchestrator | Monday 23 February 2026 20:29:37 +0000 (0:00:01.280) 0:00:01.280 *******
2026-02-23 20:29:50.449389 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:29:50.449407 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:29:50.449423 | orchestrator | changed: [testbed-manager]
2026-02-23 20:29:50.449441 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:29:50.449457 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:29:50.449474 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:29:50.449491 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:29:50.449507 | orchestrator |
2026-02-23 20:29:50.449525 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-02-23 20:29:50.449536 | orchestrator | Monday 23 February 2026 20:29:40 +0000 (0:00:03.512) 0:00:04.792 *******
2026-02-23 20:29:50.449598 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-23 20:29:50.449611 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-23 20:29:50.449629 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-23 20:29:50.449639 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-23 20:29:50.449649 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-23 20:29:50.449658 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-23 20:29:50.449668 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-23 20:29:50.449678 | orchestrator |
2026-02-23 20:29:50.449688 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2026-02-23 20:29:50.449698 | orchestrator | Monday 23 February 2026 20:29:42 +0000 (0:00:01.506) 0:00:06.299 *******
2026-02-23 20:29:50.449713 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-23 20:29:41.166050', 'end': '2026-02-23 20:29:41.171537', 'delta': '0:00:00.005487', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-23 20:29:50.449733 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-23 20:29:50.449745 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-23 20:29:50.449784 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-23 20:29:50.449813 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-23 20:29:50.449826 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-23 20:29:50.449837 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-23 20:29:50.449848 | orchestrator |
2026-02-23 20:29:50.449859 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-02-23 20:29:50.449870 | orchestrator | Monday 23 February 2026 20:29:43 +0000 (0:00:01.769) 0:00:08.068 *******
2026-02-23 20:29:50.449881 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-23 20:29:50.449892 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-23 20:29:50.449902 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-23 20:29:50.449913 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-23 20:29:50.449924 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-23 20:29:50.449935 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-23 20:29:50.449946 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-23 20:29:50.449957 | orchestrator |
2026-02-23 20:29:50.449968 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
****************** 2026-02-23 20:29:50.449979 | orchestrator | Monday 23 February 2026 20:29:46 +0000 (0:00:02.467) 0:00:10.536 *******
2026-02-23 20:29:50.449989 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-02-23 20:29:50.450000 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-02-23 20:29:50.450011 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-02-23 20:29:50.450123 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-02-23 20:29:50.450135 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-02-23 20:29:50.450146 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-02-23 20:29:50.450179 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-02-23 20:29:50.450194 | orchestrator |
2026-02-23 20:29:50.450204 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:29:50.450223 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:29:50.450235 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:29:50.450245 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:29:50.450255 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:29:50.450265 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:29:50.450281 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:29:50.450291 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:29:50.450301 | orchestrator |
2026-02-23 20:29:50.450311 | orchestrator |
2026-02-23 20:29:50.450321 | orchestrator | TASKS
RECAP ********************************************************************
2026-02-23 20:29:50.450331 | orchestrator | Monday 23 February 2026 20:29:49 +0000 (0:00:03.309) 0:00:13.846 *******
2026-02-23 20:29:50.450340 | orchestrator | ===============================================================================
2026-02-23 20:29:50.450350 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.51s
2026-02-23 20:29:50.450360 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.31s
2026-02-23 20:29:50.450370 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.47s
2026-02-23 20:29:50.450380 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.77s
2026-02-23 20:29:50.450390 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.51s
2026-02-23 20:29:50.450399 | orchestrator | 2026-02-23 20:29:50 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:29:50.450409 | orchestrator | 2026-02-23 20:29:50 | INFO  | Task b84f0933-4922-4ac9-8835-1e38e1094320 is in state STARTED
2026-02-23 20:29:50.450431 | orchestrator | 2026-02-23 20:29:50 | INFO  | Task 5e44d9b2-df87-4d93-91d9-c3c0c6df0296 is in state SUCCESS
2026-02-23 20:29:50.450441 | orchestrator | 2026-02-23 20:29:50 | INFO  | Task 3b475625-1ca8-4260-94d6-850033f04d70 is in state STARTED
2026-02-23 20:29:50.450451 | orchestrator | 2026-02-23 20:29:50 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:29:50.450460 | orchestrator | 2026-02-23 20:29:50 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:29:50.450470 | orchestrator | 2026-02-23 20:29:50 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED
2026-02-23 20:29:50.450480 | orchestrator | 2026-02-23 20:29:50 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:29:53.385493 | orchestrator | 2026-02-23 20:29:53 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:29:53.385620 | orchestrator | 2026-02-23 20:29:53 | INFO  | Task b84f0933-4922-4ac9-8835-1e38e1094320 is in state STARTED
2026-02-23 20:29:53.385661 | orchestrator | 2026-02-23 20:29:53 | INFO  | Task 5eb651a2-cf04-45cd-91e6-8e567ef133c6 is in state STARTED
2026-02-23 20:29:53.385673 | orchestrator | 2026-02-23 20:29:53 | INFO  | Task 3b475625-1ca8-4260-94d6-850033f04d70 is in state STARTED
2026-02-23 20:29:53.385684 | orchestrator | 2026-02-23 20:29:53 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:29:53.385695 | orchestrator | 2026-02-23 20:29:53 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:29:53.385706 | orchestrator | 2026-02-23 20:29:53 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED
2026-02-23 20:29:53.385717 | orchestrator | 2026-02-23 20:29:53 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:30:15.060351 | orchestrator | 2026-02-23 20:30:15 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:30:15.060403 | orchestrator | 2026-02-23 20:30:15 | INFO  | Task b84f0933-4922-4ac9-8835-1e38e1094320 is in state STARTED
2026-02-23 20:30:15.060425 | orchestrator | 2026-02-23 20:30:15 | INFO  | Task 5eb651a2-cf04-45cd-91e6-8e567ef133c6 is in state STARTED
2026-02-23 20:30:15.060430 | orchestrator | 2026-02-23 20:30:15 | INFO  | Task 3b475625-1ca8-4260-94d6-850033f04d70 is in state SUCCESS
2026-02-23 20:30:15.060434 | orchestrator | 2026-02-23 20:30:15 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:30:15.060438 | orchestrator | 2026-02-23 20:30:15 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:30:15.060442 | orchestrator | 2026-02-23 20:30:15 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED
2026-02-23 20:30:15.060446 | orchestrator | 2026-02-23 20:30:15 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:30:27.341830 | orchestrator | 2026-02-23 20:30:27 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:30:27.341949 | orchestrator | 2026-02-23 20:30:27 | INFO  | Task b84f0933-4922-4ac9-8835-1e38e1094320 is in state SUCCESS
2026-02-23 20:30:27.341966 | orchestrator | 2026-02-23 20:30:27 | INFO  | Task 5eb651a2-cf04-45cd-91e6-8e567ef133c6 is in state STARTED
2026-02-23 20:30:27.342433 | orchestrator | 2026-02-23 20:30:27 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:30:27.344413 | orchestrator | 2026-02-23 20:30:27 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:30:27.345267 | orchestrator | 2026-02-23 20:30:27 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED
2026-02-23 20:30:27.345293 | orchestrator | 2026-02-23 20:30:27 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:30:54.885056 | orchestrator | 2026-02-23 20:30:54 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:30:54.885145 | orchestrator | 2026-02-23 20:30:54 | INFO  | Task 5eb651a2-cf04-45cd-91e6-8e567ef133c6 is in state SUCCESS
2026-02-23 20:30:54.887955 | orchestrator |
2026-02-23 20:30:54.888036 | orchestrator |
2026-02-23 20:30:54.888042 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-02-23 20:30:54.888048 | orchestrator |
2026-02-23 20:30:54.888052 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-02-23 20:30:54.888058 | orchestrator | Monday 23 February 2026 20:29:35 +0000 (0:00:00.796) 0:00:00.796 *******
2026-02-23 20:30:54.888062 | orchestrator | ok: [testbed-manager] => {
2026-02-23 20:30:54.888068 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-02-23 20:30:54.888074 | orchestrator | } 2026-02-23 20:30:54.888078 | orchestrator | 2026-02-23 20:30:54.888083 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-02-23 20:30:54.888087 | orchestrator | Monday 23 February 2026 20:29:36 +0000 (0:00:00.325) 0:00:01.122 ******* 2026-02-23 20:30:54.888091 | orchestrator | ok: [testbed-manager] 2026-02-23 20:30:54.888096 | orchestrator | 2026-02-23 20:30:54.888100 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-02-23 20:30:54.888104 | orchestrator | Monday 23 February 2026 20:29:38 +0000 (0:00:02.490) 0:00:03.613 ******* 2026-02-23 20:30:54.888108 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-02-23 20:30:54.888113 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-02-23 20:30:54.888117 | orchestrator | 2026-02-23 20:30:54.888121 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-02-23 20:30:54.888125 | orchestrator | Monday 23 February 2026 20:29:39 +0000 (0:00:01.146) 0:00:04.759 ******* 2026-02-23 20:30:54.888129 | orchestrator | changed: [testbed-manager] 2026-02-23 20:30:54.888133 | orchestrator | 2026-02-23 20:30:54.888137 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-02-23 20:30:54.888141 | orchestrator | Monday 23 February 2026 20:29:41 +0000 (0:00:02.027) 0:00:06.787 ******* 2026-02-23 20:30:54.888145 | orchestrator | changed: [testbed-manager] 2026-02-23 20:30:54.888148 | orchestrator | 2026-02-23 20:30:54.888152 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-02-23 20:30:54.888156 | orchestrator | Monday 23 February 2026 20:29:44 +0000 (0:00:02.708) 0:00:09.495 ******* 2026-02-23 20:30:54.888160 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2026-02-23 20:30:54.888164 | orchestrator | ok: [testbed-manager] 2026-02-23 20:30:54.888168 | orchestrator | 2026-02-23 20:30:54.888173 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-02-23 20:30:54.888177 | orchestrator | Monday 23 February 2026 20:30:09 +0000 (0:00:25.461) 0:00:34.956 ******* 2026-02-23 20:30:54.888181 | orchestrator | changed: [testbed-manager] 2026-02-23 20:30:54.888185 | orchestrator | 2026-02-23 20:30:54.888189 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:30:54.888193 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:30:54.888199 | orchestrator | 2026-02-23 20:30:54.888203 | orchestrator | 2026-02-23 20:30:54.888207 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:30:54.888211 | orchestrator | Monday 23 February 2026 20:30:12 +0000 (0:00:02.505) 0:00:37.462 ******* 2026-02-23 20:30:54.888215 | orchestrator | =============================================================================== 2026-02-23 20:30:54.888255 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.46s 2026-02-23 20:30:54.888270 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.71s 2026-02-23 20:30:54.888274 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.51s 2026-02-23 20:30:54.888277 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.49s 2026-02-23 20:30:54.888281 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.03s 2026-02-23 20:30:54.888285 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.15s 2026-02-23 20:30:54.888293 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.33s 2026-02-23 20:30:54.888296 | orchestrator | 2026-02-23 20:30:54.888300 | orchestrator | 2026-02-23 20:30:54.888304 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-23 20:30:54.888308 | orchestrator | 2026-02-23 20:30:54.888311 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-23 20:30:54.888351 | orchestrator | Monday 23 February 2026 20:29:36 +0000 (0:00:00.381) 0:00:00.381 ******* 2026-02-23 20:30:54.888356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-23 20:30:54.888362 | orchestrator | 2026-02-23 20:30:54.888365 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-23 20:30:54.888369 | orchestrator | Monday 23 February 2026 20:29:36 +0000 (0:00:00.201) 0:00:00.583 ******* 2026-02-23 20:30:54.888378 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-23 20:30:54.888383 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-23 20:30:54.888386 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-23 20:30:54.888390 | orchestrator | 2026-02-23 20:30:54.888394 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-23 20:30:54.888398 | orchestrator | Monday 23 February 2026 20:29:38 +0000 (0:00:02.458) 0:00:03.041 ******* 2026-02-23 20:30:54.888402 | orchestrator | changed: [testbed-manager] 2026-02-23 20:30:54.888406 | orchestrator | 2026-02-23 20:30:54.888409 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-23 20:30:54.888413 | orchestrator | Monday 23 February 2026 20:29:40 +0000 (0:00:01.901) 
0:00:04.943 ******* 2026-02-23 20:30:54.888427 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-23 20:30:54.888431 | orchestrator | ok: [testbed-manager] 2026-02-23 20:30:54.888435 | orchestrator | 2026-02-23 20:30:54.888439 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-23 20:30:54.888443 | orchestrator | Monday 23 February 2026 20:30:16 +0000 (0:00:35.406) 0:00:40.350 ******* 2026-02-23 20:30:54.888447 | orchestrator | changed: [testbed-manager] 2026-02-23 20:30:54.888450 | orchestrator | 2026-02-23 20:30:54.888454 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-23 20:30:54.888458 | orchestrator | Monday 23 February 2026 20:30:18 +0000 (0:00:02.163) 0:00:42.513 ******* 2026-02-23 20:30:54.888462 | orchestrator | ok: [testbed-manager] 2026-02-23 20:30:54.888465 | orchestrator | 2026-02-23 20:30:54.888469 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-23 20:30:54.888473 | orchestrator | Monday 23 February 2026 20:30:19 +0000 (0:00:01.355) 0:00:43.868 ******* 2026-02-23 20:30:54.888477 | orchestrator | changed: [testbed-manager] 2026-02-23 20:30:54.888481 | orchestrator | 2026-02-23 20:30:54.888484 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-23 20:30:54.888488 | orchestrator | Monday 23 February 2026 20:30:22 +0000 (0:00:02.778) 0:00:46.647 ******* 2026-02-23 20:30:54.888492 | orchestrator | changed: [testbed-manager] 2026-02-23 20:30:54.888496 | orchestrator | 2026-02-23 20:30:54.888500 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-23 20:30:54.888503 | orchestrator | Monday 23 February 2026 20:30:23 +0000 (0:00:00.865) 0:00:47.513 ******* 2026-02-23 20:30:54.888507 | orchestrator | changed: 
[testbed-manager] 2026-02-23 20:30:54.888511 | orchestrator | 2026-02-23 20:30:54.888515 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-23 20:30:54.888519 | orchestrator | Monday 23 February 2026 20:30:24 +0000 (0:00:01.290) 0:00:48.803 ******* 2026-02-23 20:30:54.888522 | orchestrator | ok: [testbed-manager] 2026-02-23 20:30:54.888526 | orchestrator | 2026-02-23 20:30:54.888530 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:30:54.888537 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:30:54.888541 | orchestrator | 2026-02-23 20:30:54.888545 | orchestrator | 2026-02-23 20:30:54.888549 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:30:54.888552 | orchestrator | Monday 23 February 2026 20:30:25 +0000 (0:00:00.656) 0:00:49.460 ******* 2026-02-23 20:30:54.888556 | orchestrator | =============================================================================== 2026-02-23 20:30:54.888560 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.41s 2026-02-23 20:30:54.888563 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.78s 2026-02-23 20:30:54.888567 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.46s 2026-02-23 20:30:54.888593 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.16s 2026-02-23 20:30:54.888597 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.90s 2026-02-23 20:30:54.888601 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.36s 2026-02-23 20:30:54.888605 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.29s 
2026-02-23 20:30:54.888608 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.87s 2026-02-23 20:30:54.888612 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.66s 2026-02-23 20:30:54.888616 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.20s 2026-02-23 20:30:54.888620 | orchestrator | 2026-02-23 20:30:54.888623 | orchestrator | 2026-02-23 20:30:54.888627 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-02-23 20:30:54.888631 | orchestrator | 2026-02-23 20:30:54.888635 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-02-23 20:30:54.888638 | orchestrator | Monday 23 February 2026 20:29:55 +0000 (0:00:00.267) 0:00:00.267 ******* 2026-02-23 20:30:54.888642 | orchestrator | ok: [testbed-manager] 2026-02-23 20:30:54.888646 | orchestrator | 2026-02-23 20:30:54.888650 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-02-23 20:30:54.888653 | orchestrator | Monday 23 February 2026 20:29:56 +0000 (0:00:00.882) 0:00:01.150 ******* 2026-02-23 20:30:54.888657 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-02-23 20:30:54.888661 | orchestrator | 2026-02-23 20:30:54.888665 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-02-23 20:30:54.888668 | orchestrator | Monday 23 February 2026 20:29:57 +0000 (0:00:00.748) 0:00:01.898 ******* 2026-02-23 20:30:54.888672 | orchestrator | changed: [testbed-manager] 2026-02-23 20:30:54.888676 | orchestrator | 2026-02-23 20:30:54.888679 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-02-23 20:30:54.888683 | orchestrator | Monday 23 February 2026 20:29:58 +0000 (0:00:00.972) 0:00:02.871 ******* 2026-02-23 20:30:54.888689 | 
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-02-23 20:30:54.888692 | orchestrator | ok: [testbed-manager] 2026-02-23 20:30:54.888696 | orchestrator | 2026-02-23 20:30:54.888700 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-02-23 20:30:54.888704 | orchestrator | Monday 23 February 2026 20:30:50 +0000 (0:00:52.149) 0:00:55.020 ******* 2026-02-23 20:30:54.888708 | orchestrator | changed: [testbed-manager] 2026-02-23 20:30:54.888711 | orchestrator | 2026-02-23 20:30:54.888715 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:30:54.888719 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:30:54.888723 | orchestrator | 2026-02-23 20:30:54.888726 | orchestrator | 2026-02-23 20:30:54.888730 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:30:54.888741 | orchestrator | Monday 23 February 2026 20:30:53 +0000 (0:00:03.183) 0:00:58.203 ******* 2026-02-23 20:30:54.888744 | orchestrator | =============================================================================== 2026-02-23 20:30:54.888748 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 52.15s 2026-02-23 20:30:54.888752 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.18s 2026-02-23 20:30:54.888756 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.97s 2026-02-23 20:30:54.888759 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.88s 2026-02-23 20:30:54.888763 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.75s 2026-02-23 20:30:54.888767 | orchestrator | 2026-02-23 20:30:54 | INFO  | Task 
1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED 2026-02-23 20:30:54.888771 | orchestrator | 2026-02-23 20:30:54 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:30:54.888775 | orchestrator | 2026-02-23 20:30:54 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED 2026-02-23 20:30:54.888779 | orchestrator | 2026-02-23 20:30:54 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:30:57.926405 | orchestrator | 2026-02-23 20:30:57 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:30:57.927611 | orchestrator | 2026-02-23 20:30:57 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED 2026-02-23 20:30:57.928929 | orchestrator | 2026-02-23 20:30:57 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:30:57.931343 | orchestrator | 2026-02-23 20:30:57 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED 2026-02-23 20:30:57.931394 | orchestrator | 2026-02-23 20:30:57 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:31:00.965080 | orchestrator | 2026-02-23 20:31:00 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:31:00.966278 | orchestrator | 2026-02-23 20:31:00 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED 2026-02-23 20:31:00.967739 | orchestrator | 2026-02-23 20:31:00 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:31:00.968879 | orchestrator | 2026-02-23 20:31:00 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED 2026-02-23 20:31:00.969164 | orchestrator | 2026-02-23 20:31:00 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:31:04.005424 | orchestrator | 2026-02-23 20:31:04 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:31:04.007037 | orchestrator | 2026-02-23 20:31:04 | INFO  | Task 
1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED 2026-02-23 20:31:04.007580 | orchestrator | 2026-02-23 20:31:04 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:31:04.008495 | orchestrator | 2026-02-23 20:31:04 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED 2026-02-23 20:31:04.008732 | orchestrator | 2026-02-23 20:31:04 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:31:07.043170 | orchestrator | 2026-02-23 20:31:07 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:31:07.044522 | orchestrator | 2026-02-23 20:31:07 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED 2026-02-23 20:31:07.046172 | orchestrator | 2026-02-23 20:31:07 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:31:07.047331 | orchestrator | 2026-02-23 20:31:07 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED 2026-02-23 20:31:07.047374 | orchestrator | 2026-02-23 20:31:07 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:31:10.094212 | orchestrator | 2026-02-23 20:31:10 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:31:10.097915 | orchestrator | 2026-02-23 20:31:10 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED 2026-02-23 20:31:10.099127 | orchestrator | 2026-02-23 20:31:10 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:31:10.100727 | orchestrator | 2026-02-23 20:31:10 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED 2026-02-23 20:31:10.100749 | orchestrator | 2026-02-23 20:31:10 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:31:13.148276 | orchestrator | 2026-02-23 20:31:13 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:31:13.148699 | orchestrator | 2026-02-23 20:31:13 | INFO  | Task 
1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED 2026-02-23 20:31:13.150126 | orchestrator | 2026-02-23 20:31:13 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:31:13.151400 | orchestrator | 2026-02-23 20:31:13 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state STARTED 2026-02-23 20:31:13.151425 | orchestrator | 2026-02-23 20:31:13 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:31:16.187138 | orchestrator | 2026-02-23 20:31:16 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:31:16.188138 | orchestrator | 2026-02-23 20:31:16 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED 2026-02-23 20:31:16.189118 | orchestrator | 2026-02-23 20:31:16 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:31:16.190142 | orchestrator | 2026-02-23 20:31:16 | INFO  | Task 02565b3e-eb3c-4793-95cb-6d254d14b874 is in state SUCCESS 2026-02-23 20:31:16.190406 | orchestrator | 2026-02-23 20:31:16.190418 | orchestrator | 2026-02-23 20:31:16.190422 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:31:16.190427 | orchestrator | 2026-02-23 20:31:16.190431 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:31:16.190436 | orchestrator | Monday 23 February 2026 20:29:36 +0000 (0:00:00.180) 0:00:00.180 ******* 2026-02-23 20:31:16.190440 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-23 20:31:16.190444 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-23 20:31:16.190448 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-23 20:31:16.190452 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-23 20:31:16.190456 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-23 
20:31:16.190460 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-23 20:31:16.190464 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-23 20:31:16.190468 | orchestrator | 2026-02-23 20:31:16.190471 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-23 20:31:16.190475 | orchestrator | 2026-02-23 20:31:16.190479 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-23 20:31:16.190483 | orchestrator | Monday 23 February 2026 20:29:37 +0000 (0:00:00.820) 0:00:01.000 ******* 2026-02-23 20:31:16.190495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:31:16.190501 | orchestrator | 2026-02-23 20:31:16.190505 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-23 20:31:16.190520 | orchestrator | Monday 23 February 2026 20:29:39 +0000 (0:00:01.982) 0:00:02.982 ******* 2026-02-23 20:31:16.190524 | orchestrator | ok: [testbed-manager] 2026-02-23 20:31:16.190528 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:31:16.190532 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:31:16.190536 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:31:16.190540 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:31:16.190543 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:31:16.190547 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:31:16.190551 | orchestrator | 2026-02-23 20:31:16.190554 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-23 20:31:16.190558 | orchestrator | Monday 23 February 2026 20:29:41 +0000 (0:00:02.053) 0:00:05.036 ******* 2026-02-23 20:31:16.190562 | orchestrator | 
ok: [testbed-node-0] 2026-02-23 20:31:16.190566 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:31:16.190569 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:31:16.190573 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:31:16.190577 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:31:16.190580 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:31:16.190584 | orchestrator | ok: [testbed-manager] 2026-02-23 20:31:16.190588 | orchestrator | 2026-02-23 20:31:16.190592 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-23 20:31:16.190595 | orchestrator | Monday 23 February 2026 20:29:44 +0000 (0:00:03.190) 0:00:08.227 ******* 2026-02-23 20:31:16.190599 | orchestrator | changed: [testbed-manager] 2026-02-23 20:31:16.190626 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:31:16.190631 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:31:16.190635 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:31:16.190638 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:31:16.190642 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:31:16.190646 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:31:16.190650 | orchestrator | 2026-02-23 20:31:16.190654 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-23 20:31:16.190657 | orchestrator | Monday 23 February 2026 20:29:46 +0000 (0:00:02.180) 0:00:10.407 ******* 2026-02-23 20:31:16.190661 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:31:16.190665 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:31:16.190669 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:31:16.190673 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:31:16.190676 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:31:16.190680 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:31:16.190684 | orchestrator | changed: [testbed-manager] 2026-02-23 20:31:16.190688 | 
orchestrator | 2026-02-23 20:31:16.190692 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-23 20:31:16.190696 | orchestrator | Monday 23 February 2026 20:29:58 +0000 (0:00:11.537) 0:00:21.944 ******* 2026-02-23 20:31:16.190699 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:31:16.190703 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:31:16.190707 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:31:16.190711 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:31:16.190714 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:31:16.190718 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:31:16.190722 | orchestrator | changed: [testbed-manager] 2026-02-23 20:31:16.190726 | orchestrator | 2026-02-23 20:31:16.190729 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-23 20:31:16.190733 | orchestrator | Monday 23 February 2026 20:30:47 +0000 (0:00:48.686) 0:01:10.631 ******* 2026-02-23 20:31:16.190738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:31:16.190742 | orchestrator | 2026-02-23 20:31:16.190746 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-23 20:31:16.190750 | orchestrator | Monday 23 February 2026 20:30:48 +0000 (0:00:01.123) 0:01:11.755 ******* 2026-02-23 20:31:16.190757 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-23 20:31:16.190761 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-23 20:31:16.190765 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-23 20:31:16.190769 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-23 20:31:16.190778 | 
orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-23 20:31:16.190782 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-23 20:31:16.190786 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-23 20:31:16.190808 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-23 20:31:16.190812 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-23 20:31:16.190816 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-23 20:31:16.190819 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-23 20:31:16.190823 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-23 20:31:16.190827 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-23 20:31:16.190830 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-23 20:31:16.190834 | orchestrator |
2026-02-23 20:31:16.190838 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-23 20:31:16.190843 | orchestrator | Monday 23 February 2026 20:30:52 +0000 (0:00:04.702) 0:01:16.457 *******
2026-02-23 20:31:16.190846 | orchestrator | ok: [testbed-manager]
2026-02-23 20:31:16.190850 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:31:16.190854 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:31:16.190858 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:31:16.190862 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:31:16.190865 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:31:16.190869 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:31:16.190873 | orchestrator |
2026-02-23 20:31:16.190877 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-23 20:31:16.190881 | orchestrator | Monday 23 February 2026 20:30:54 +0000 (0:00:01.088) 0:01:17.546 *******
2026-02-23 20:31:16.190884 | orchestrator | changed: [testbed-manager]
2026-02-23 20:31:16.190888 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:31:16.190892 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:31:16.190896 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:31:16.190899 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:31:16.190903 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:31:16.190907 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:31:16.190913 | orchestrator |
2026-02-23 20:31:16.190919 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-23 20:31:16.190925 | orchestrator | Monday 23 February 2026 20:30:55 +0000 (0:00:01.399) 0:01:18.945 *******
2026-02-23 20:31:16.190931 | orchestrator | ok: [testbed-manager]
2026-02-23 20:31:16.190937 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:31:16.190943 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:31:16.190949 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:31:16.190955 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:31:16.190961 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:31:16.190968 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:31:16.190974 | orchestrator |
2026-02-23 20:31:16.190980 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-23 20:31:16.190986 | orchestrator | Monday 23 February 2026 20:30:56 +0000 (0:00:01.560) 0:01:20.506 *******
2026-02-23 20:31:16.190993 | orchestrator | ok: [testbed-manager]
2026-02-23 20:31:16.190999 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:31:16.191005 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:31:16.191009 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:31:16.191012 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:31:16.191016 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:31:16.191020 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:31:16.191023 | orchestrator |
2026-02-23 20:31:16.191027 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-23 20:31:16.191035 | orchestrator | Monday 23 February 2026 20:30:58 +0000 (0:00:01.872) 0:01:22.379 *******
2026-02-23 20:31:16.191039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-23 20:31:16.191044 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-23 20:31:16.191048 | orchestrator |
2026-02-23 20:31:16.191054 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-23 20:31:16.191059 | orchestrator | Monday 23 February 2026 20:31:00 +0000 (0:00:01.228) 0:01:23.608 *******
2026-02-23 20:31:16.191063 | orchestrator | changed: [testbed-manager]
2026-02-23 20:31:16.191067 | orchestrator |
2026-02-23 20:31:16.191072 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-23 20:31:16.191076 | orchestrator | Monday 23 February 2026 20:31:01 +0000 (0:00:01.681) 0:01:25.289 *******
2026-02-23 20:31:16.191080 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:31:16.191085 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:31:16.191089 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:31:16.191093 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:31:16.191097 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:31:16.191101 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:31:16.191106 | orchestrator | changed: [testbed-manager]
2026-02-23 20:31:16.191110 | orchestrator |
2026-02-23 20:31:16.191114 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:31:16.191119 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:31:16.191124 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:31:16.191128 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:31:16.191132 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:31:16.191140 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:31:16.191145 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:31:16.191149 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:31:16.191153 | orchestrator |
2026-02-23 20:31:16.191157 | orchestrator |
2026-02-23 20:31:16.191162 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:31:16.191166 | orchestrator | Monday 23 February 2026 20:31:12 +0000 (0:00:11.139) 0:01:36.429 *******
2026-02-23 20:31:16.191170 | orchestrator | ===============================================================================
2026-02-23 20:31:16.191174 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 48.69s
2026-02-23 20:31:16.191178 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.54s
2026-02-23 20:31:16.191182 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.14s
2026-02-23 20:31:16.191186 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.70s
2026-02-23 20:31:16.191191 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.19s
2026-02-23 20:31:16.191195 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.18s
2026-02-23 20:31:16.191199 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.05s
2026-02-23 20:31:16.191205 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.98s
2026-02-23 20:31:16.191209 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.87s
2026-02-23 20:31:16.191213 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.68s
2026-02-23 20:31:16.191218 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.56s
2026-02-23 20:31:16.191222 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.40s
2026-02-23 20:31:16.191226 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.23s
2026-02-23 20:31:16.191230 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.12s
2026-02-23 20:31:16.191234 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.09s
2026-02-23 20:31:16.191239 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s
2026-02-23 20:31:16.191243 | orchestrator | 2026-02-23 20:31:16 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:19.234160 | orchestrator | 2026-02-23 20:31:19 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:19.237328 | orchestrator | 2026-02-23 20:31:19 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:31:19.240282 | orchestrator | 2026-02-23 20:31:19 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:19.240319 | orchestrator | 2026-02-23 20:31:19 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:22.284977 | orchestrator | 2026-02-23 20:31:22 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:22.285980 | orchestrator | 2026-02-23 20:31:22 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:31:22.288799 | orchestrator | 2026-02-23 20:31:22 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:22.288836 | orchestrator | 2026-02-23 20:31:22 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:25.343355 | orchestrator | 2026-02-23 20:31:25 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:25.343419 | orchestrator | 2026-02-23 20:31:25 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:31:25.343428 | orchestrator | 2026-02-23 20:31:25 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:25.343435 | orchestrator | 2026-02-23 20:31:25 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:28.379980 | orchestrator | 2026-02-23 20:31:28 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:28.381251 | orchestrator | 2026-02-23 20:31:28 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:31:28.383049 | orchestrator | 2026-02-23 20:31:28 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:28.383087 | orchestrator | 2026-02-23 20:31:28 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:31.488498 | orchestrator | 2026-02-23 20:31:31 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:31.489294 | orchestrator | 2026-02-23 20:31:31 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:31:31.489897 | orchestrator | 2026-02-23 20:31:31 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:31.490081 | orchestrator | 2026-02-23 20:31:31 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:34.521138 | orchestrator | 2026-02-23 20:31:34 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:34.522671 | orchestrator | 2026-02-23 20:31:34 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:31:34.524915 | orchestrator | 2026-02-23 20:31:34 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:34.525340 | orchestrator | 2026-02-23 20:31:34 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:37.564045 | orchestrator | 2026-02-23 20:31:37 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:37.564118 | orchestrator | 2026-02-23 20:31:37 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:31:37.566445 | orchestrator | 2026-02-23 20:31:37 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:37.566536 | orchestrator | 2026-02-23 20:31:37 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:40.616398 | orchestrator | 2026-02-23 20:31:40 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:40.617476 | orchestrator | 2026-02-23 20:31:40 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state STARTED
2026-02-23 20:31:40.619500 | orchestrator | 2026-02-23 20:31:40 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:40.619976 | orchestrator | 2026-02-23 20:31:40 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:43.655539 | orchestrator | 2026-02-23 20:31:43 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:43.655752 | orchestrator | 2026-02-23 20:31:43 | INFO  | Task 1f33f713-bddd-4274-9a11-f2ce7ed586fa is in state SUCCESS
2026-02-23 20:31:43.659384 | orchestrator |
2026-02-23 20:31:43.659449 | orchestrator |
2026-02-23 20:31:43.659456 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-23 20:31:43.659461 | orchestrator |
2026-02-23 20:31:43.659466 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-23 20:31:43.659470 | orchestrator | Monday 23 February 2026 20:29:28 +0000 (0:00:00.257) 0:00:00.257 *******
2026-02-23 20:31:43.659476 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-23 20:31:43.659481 | orchestrator |
2026-02-23 20:31:43.659485 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-23 20:31:43.659489 | orchestrator | Monday 23 February 2026 20:29:29 +0000 (0:00:01.118) 0:00:01.375 *******
2026-02-23 20:31:43.659493 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-23 20:31:43.659497 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-23 20:31:43.659501 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-23 20:31:43.659509 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-23 20:31:43.659514 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-23 20:31:43.659517 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-23 20:31:43.659521 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-23 20:31:43.659526 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-23 20:31:43.659530 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-23 20:31:43.659534 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-23 20:31:43.659538 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-23 20:31:43.659553 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-23 20:31:43.659557 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-23 20:31:43.659561 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-23 20:31:43.659565 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-23 20:31:43.659570 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-23 20:31:43.659574 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-23 20:31:43.659578 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-23 20:31:43.659582 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-23 20:31:43.659586 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-23 20:31:43.659591 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-23 20:31:43.659595 | orchestrator |
2026-02-23 20:31:43.659599 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-23 20:31:43.659604 | orchestrator | Monday 23 February 2026 20:29:33 +0000 (0:00:03.863) 0:00:05.238 *******
2026-02-23 20:31:43.659608 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-23
20:31:43.659680 | orchestrator | 2026-02-23 20:31:43.659687 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-23 20:31:43.659693 | orchestrator | Monday 23 February 2026 20:29:34 +0000 (0:00:01.342) 0:00:06.580 ******* 2026-02-23 20:31:43.659703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.659713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.660226 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.660261 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.660278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.660282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.660296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.660312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660325 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660353 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660363 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.660384 | orchestrator | 2026-02-23 20:31:43.660388 | orchestrator | TASK [service-cert-copy : common | Copying 
over backend internal TLS certificate] *** 2026-02-23 20:31:43.660392 | orchestrator | Monday 23 February 2026 20:29:40 +0000 (0:00:05.314) 0:00:11.895 ******* 2026-02-23 20:31:43.660397 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660401 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660405 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660432 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:31:43.660436 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:31:43.660441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660698 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:31:43.660710 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:31:43.660736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660761 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:31:43.660767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660787 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:31:43.660792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660820 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:31:43.660826 | orchestrator | 2026-02-23 20:31:43.660832 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-23 20:31:43.660838 | orchestrator | Monday 23 February 2026 20:29:41 +0000 (0:00:01.815) 0:00:13.710 ******* 2026-02-23 20:31:43.660846 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660852 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660859 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660865 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:31:43.660870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-23 20:31:43.660898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660921 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:31:43.660927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660945 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:31:43.660951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.660970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.660982 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:31:43.660988 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:31:43.660997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.661003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.661009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.661015 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:31:43.661022 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-23 20:31:43.661030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.661040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.661045 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:31:43.661050 | orchestrator | 2026-02-23 20:31:43.661055 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-23 20:31:43.661061 | orchestrator | Monday 23 February 2026 20:29:45 +0000 (0:00:03.555) 0:00:17.266 ******* 2026-02-23 20:31:43.661066 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:31:43.661072 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:31:43.661078 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:31:43.661084 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:31:43.661096 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:31:43.661117 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:31:43.661123 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:31:43.661137 | orchestrator | 2026-02-23 20:31:43.661152 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-23 20:31:43.661167 | orchestrator | Monday 23 February 2026 20:29:46 +0000 (0:00:01.458) 0:00:18.725 ******* 2026-02-23 20:31:43.661181 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:31:43.661197 | orchestrator | skipping: [testbed-node-0] 
2026-02-23 20:31:43.661211 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:31:43.661225 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:31:43.661241 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:31:43.661260 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:31:43.661277 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:31:43.661294 | orchestrator | 2026-02-23 20:31:43.661315 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-23 20:31:43.661334 | orchestrator | Monday 23 February 2026 20:29:48 +0000 (0:00:01.368) 0:00:20.094 ******* 2026-02-23 20:31:43.661355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661368 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661461 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 
20:31:43.661472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661495 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661502 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661509 | orchestrator | 2026-02-23 20:31:43.661516 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-23 20:31:43.661523 | orchestrator | Monday 23 February 2026 20:29:55 +0000 (0:00:07.383) 0:00:27.477 ******* 2026-02-23 20:31:43.661529 | orchestrator | [WARNING]: Skipped 2026-02-23 20:31:43.661538 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-23 20:31:43.661544 | orchestrator | to this access issue: 2026-02-23 20:31:43.661548 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-23 20:31:43.661552 | orchestrator | directory 2026-02-23 20:31:43.661556 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-23 20:31:43.661560 | orchestrator | 2026-02-23 20:31:43.661566 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-23 20:31:43.661573 | orchestrator | Monday 23 February 2026 20:29:57 +0000 (0:00:01.461) 0:00:28.938 ******* 2026-02-23 20:31:43.661577 | orchestrator | [WARNING]: Skipped 2026-02-23 20:31:43.661581 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-23 20:31:43.661588 | orchestrator | to this access issue: 2026-02-23 20:31:43.661592 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-23 20:31:43.661596 | orchestrator | directory 2026-02-23 20:31:43.661600 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-23 20:31:43.661604 | orchestrator | 2026-02-23 20:31:43.661608 | orchestrator | TASK [common : Find custom fluentd format 
config files] ************************ 2026-02-23 20:31:43.661612 | orchestrator | Monday 23 February 2026 20:29:57 +0000 (0:00:00.913) 0:00:29.852 ******* 2026-02-23 20:31:43.661616 | orchestrator | [WARNING]: Skipped 2026-02-23 20:31:43.661652 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-23 20:31:43.661658 | orchestrator | to this access issue: 2026-02-23 20:31:43.661662 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-23 20:31:43.661666 | orchestrator | directory 2026-02-23 20:31:43.661670 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-23 20:31:43.661677 | orchestrator | 2026-02-23 20:31:43.661681 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-23 20:31:43.661685 | orchestrator | Monday 23 February 2026 20:29:59 +0000 (0:00:01.071) 0:00:30.923 ******* 2026-02-23 20:31:43.661689 | orchestrator | [WARNING]: Skipped 2026-02-23 20:31:43.661695 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-23 20:31:43.661698 | orchestrator | to this access issue: 2026-02-23 20:31:43.661702 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-23 20:31:43.661706 | orchestrator | directory 2026-02-23 20:31:43.661710 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-23 20:31:43.661714 | orchestrator | 2026-02-23 20:31:43.661717 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-23 20:31:43.661721 | orchestrator | Monday 23 February 2026 20:30:00 +0000 (0:00:01.327) 0:00:32.251 ******* 2026-02-23 20:31:43.661725 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:31:43.661729 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:31:43.661734 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:31:43.661740 | orchestrator | 
changed: [testbed-node-3] 2026-02-23 20:31:43.661746 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:31:43.661751 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:31:43.661756 | orchestrator | changed: [testbed-manager] 2026-02-23 20:31:43.661762 | orchestrator | 2026-02-23 20:31:43.661767 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-23 20:31:43.661772 | orchestrator | Monday 23 February 2026 20:30:06 +0000 (0:00:05.643) 0:00:37.895 ******* 2026-02-23 20:31:43.661778 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-23 20:31:43.661785 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-23 20:31:43.661790 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-23 20:31:43.661796 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-23 20:31:43.661802 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-23 20:31:43.661808 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-23 20:31:43.661813 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-23 20:31:43.661819 | orchestrator | 2026-02-23 20:31:43.661825 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-23 20:31:43.661831 | orchestrator | Monday 23 February 2026 20:30:09 +0000 (0:00:03.243) 0:00:41.138 ******* 2026-02-23 20:31:43.661837 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:31:43.661843 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:31:43.661849 | orchestrator | changed: 
[testbed-manager] 2026-02-23 20:31:43.661855 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:31:43.661862 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:31:43.661868 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:31:43.661873 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:31:43.661879 | orchestrator | 2026-02-23 20:31:43.661885 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-23 20:31:43.661892 | orchestrator | Monday 23 February 2026 20:30:13 +0000 (0:00:04.367) 0:00:45.506 ******* 2026-02-23 20:31:43.661899 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661911 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661921 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.661928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.661932 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.661941 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661945 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.661960 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661964 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.661975 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.661979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.661983 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661987 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.661996 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.662005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:31:43.662009 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662070 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662079 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662086 | orchestrator | 2026-02-23 20:31:43.662092 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-23 20:31:43.662098 | orchestrator | Monday 23 February 2026 20:30:15 +0000 (0:00:02.331) 0:00:47.837 ******* 2026-02-23 20:31:43.662104 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-23 20:31:43.662112 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-23 20:31:43.662118 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-23 20:31:43.662124 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-23 20:31:43.662130 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-23 20:31:43.662135 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-23 20:31:43.662141 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-23 20:31:43.662147 | orchestrator | 2026-02-23 20:31:43.662152 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-23 20:31:43.662158 | orchestrator | Monday 23 February 2026 20:30:20 +0000 (0:00:04.230) 0:00:52.068 ******* 2026-02-23 20:31:43.662170 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-23 20:31:43.662176 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-23 20:31:43.662182 | orchestrator | changed: [testbed-node-4] 
=> (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-23 20:31:43.662187 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-23 20:31:43.662193 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-23 20:31:43.662199 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-23 20:31:43.662205 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-23 20:31:43.662211 | orchestrator | 2026-02-23 20:31:43.662216 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-23 20:31:43.662223 | orchestrator | Monday 23 February 2026 20:30:23 +0000 (0:00:03.097) 0:00:55.165 ******* 2026-02-23 20:31:43.662229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.662242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.662249 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.662259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.662266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.662272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.662280 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-23 20:31:43.662284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-23 20:31:43.662296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662314 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662322 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662346 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-23 20:31:43.662354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:31:43.662361 | orchestrator | 2026-02-23 20:31:43.662365 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-02-23 20:31:43.662369 | orchestrator | Monday 23 February 2026 20:30:27 +0000 (0:00:03.861) 0:00:59.026 ******* 2026-02-23 20:31:43.662374 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:31:43.662380 | orchestrator | changed: [testbed-manager] 2026-02-23 20:31:43.662386 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:31:43.662392 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:31:43.662398 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:31:43.662405 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:31:43.662410 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:31:43.662416 | orchestrator | 2026-02-23 20:31:43.662422 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-02-23 20:31:43.662429 | orchestrator | Monday 23 February 2026 20:30:28 +0000 (0:00:01.763) 0:01:00.790 ******* 2026-02-23 20:31:43.662435 | orchestrator | changed: [testbed-manager] 2026-02-23 20:31:43.662441 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:31:43.662446 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:31:43.662452 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:31:43.662458 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:31:43.662465 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:31:43.662471 | orchestrator | changed: 
[testbed-node-5] 2026-02-23 20:31:43.662477 | orchestrator | 2026-02-23 20:31:43.662484 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-23 20:31:43.662491 | orchestrator | Monday 23 February 2026 20:30:30 +0000 (0:00:01.269) 0:01:02.059 ******* 2026-02-23 20:31:43.662497 | orchestrator | 2026-02-23 20:31:43.662503 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-23 20:31:43.662508 | orchestrator | Monday 23 February 2026 20:30:30 +0000 (0:00:00.063) 0:01:02.123 ******* 2026-02-23 20:31:43.662515 | orchestrator | 2026-02-23 20:31:43.662521 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-23 20:31:43.662527 | orchestrator | Monday 23 February 2026 20:30:30 +0000 (0:00:00.060) 0:01:02.184 ******* 2026-02-23 20:31:43.662533 | orchestrator | 2026-02-23 20:31:43.662539 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-23 20:31:43.662545 | orchestrator | Monday 23 February 2026 20:30:30 +0000 (0:00:00.173) 0:01:02.358 ******* 2026-02-23 20:31:43.662552 | orchestrator | 2026-02-23 20:31:43.662558 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-23 20:31:43.662564 | orchestrator | Monday 23 February 2026 20:30:30 +0000 (0:00:00.061) 0:01:02.419 ******* 2026-02-23 20:31:43.662571 | orchestrator | 2026-02-23 20:31:43.662577 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-23 20:31:43.662584 | orchestrator | Monday 23 February 2026 20:30:30 +0000 (0:00:00.059) 0:01:02.479 ******* 2026-02-23 20:31:43.662590 | orchestrator | 2026-02-23 20:31:43.662595 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-23 20:31:43.662601 | orchestrator | Monday 23 February 2026 20:30:30 +0000 (0:00:00.059) 
0:01:02.539 ******* 2026-02-23 20:31:43.662608 | orchestrator | 2026-02-23 20:31:43.662614 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-23 20:31:43.662648 | orchestrator | Monday 23 February 2026 20:30:30 +0000 (0:00:00.086) 0:01:02.625 ******* 2026-02-23 20:31:43.662656 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:31:43.662662 | orchestrator | changed: [testbed-manager] 2026-02-23 20:31:43.662669 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:31:43.662675 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:31:43.662682 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:31:43.662694 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:31:43.662700 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:31:43.662706 | orchestrator | 2026-02-23 20:31:43.662713 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-23 20:31:43.662719 | orchestrator | Monday 23 February 2026 20:31:01 +0000 (0:00:31.198) 0:01:33.824 ******* 2026-02-23 20:31:43.662726 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:31:43.662733 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:31:43.662739 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:31:43.662746 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:31:43.662752 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:31:43.662758 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:31:43.662765 | orchestrator | changed: [testbed-manager] 2026-02-23 20:31:43.662771 | orchestrator | 2026-02-23 20:31:43.662778 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-23 20:31:43.662784 | orchestrator | Monday 23 February 2026 20:31:35 +0000 (0:00:33.283) 0:02:07.108 ******* 2026-02-23 20:31:43.662790 | orchestrator | ok: [testbed-manager] 2026-02-23 20:31:43.662801 | orchestrator | ok: [testbed-node-1] 
2026-02-23 20:31:43.662807 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:31:43.662813 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:31:43.662820 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:31:43.662826 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:31:43.662833 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:31:43.662839 | orchestrator |
2026-02-23 20:31:43.662846 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-23 20:31:43.662852 | orchestrator | Monday 23 February 2026 20:31:37 +0000 (0:00:02.180) 0:02:09.289 *******
2026-02-23 20:31:43.662859 | orchestrator | changed: [testbed-manager]
2026-02-23 20:31:43.662865 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:31:43.662872 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:31:43.662878 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:31:43.662884 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:31:43.662890 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:31:43.662897 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:31:43.662903 | orchestrator |
2026-02-23 20:31:43.662910 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:31:43.662917 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-23 20:31:43.662925 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-23 20:31:43.662930 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-23 20:31:43.662937 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-23 20:31:43.662943 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-23 20:31:43.662950 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-23 20:31:43.662956 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-23 20:31:43.662962 | orchestrator |
2026-02-23 20:31:43.662968 | orchestrator |
2026-02-23 20:31:43.662975 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:31:43.662981 | orchestrator | Monday 23 February 2026 20:31:42 +0000 (0:00:05.128) 0:02:14.418 *******
2026-02-23 20:31:43.662988 | orchestrator | ===============================================================================
2026-02-23 20:31:43.663000 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 33.28s
2026-02-23 20:31:43.663006 | orchestrator | common : Restart fluentd container ------------------------------------- 31.20s
2026-02-23 20:31:43.663013 | orchestrator | common : Copying over config.json files for services -------------------- 7.38s
2026-02-23 20:31:43.663019 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.64s
2026-02-23 20:31:43.663025 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.31s
2026-02-23 20:31:43.663032 | orchestrator | common : Restart cron container ----------------------------------------- 5.13s
2026-02-23 20:31:43.663038 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.37s
2026-02-23 20:31:43.663044 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.23s
2026-02-23 20:31:43.663049 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.86s
2026-02-23 20:31:43.663056 | orchestrator | common : Check common containers ---------------------------------------- 3.86s
2026-02-23 20:31:43.663062 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.56s
2026-02-23 20:31:43.663067 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.24s
2026-02-23 20:31:43.663073 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.10s
2026-02-23 20:31:43.663079 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.33s
2026-02-23 20:31:43.663092 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.18s
2026-02-23 20:31:43.663100 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.82s
2026-02-23 20:31:43.663106 | orchestrator | common : Creating log volume -------------------------------------------- 1.76s
2026-02-23 20:31:43.663112 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.46s
2026-02-23 20:31:43.663119 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.46s
2026-02-23 20:31:43.663125 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.37s
2026-02-23 20:31:43.663132 | orchestrator | 2026-02-23 20:31:43 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:43.663139 | orchestrator | 2026-02-23 20:31:43 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:46.690867 | orchestrator | 2026-02-23 20:31:46 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:46.690970 | orchestrator | 2026-02-23 20:31:46 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:31:46.693123 | orchestrator | 2026-02-23 20:31:46 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:31:46.693805 | orchestrator | 2026-02-23 20:31:46 | INFO  | Task 8a349e84-5fc9-4b12-85c7-26d7de3aa8ff is in state STARTED
2026-02-23 20:31:46.694550 | orchestrator
| 2026-02-23 20:31:46 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:31:46.695067 | orchestrator | 2026-02-23 20:31:46 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:46.695129 | orchestrator | 2026-02-23 20:31:46 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:49.720160 | orchestrator | 2026-02-23 20:31:49 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:49.720304 | orchestrator | 2026-02-23 20:31:49 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:31:49.720868 | orchestrator | 2026-02-23 20:31:49 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:31:49.723239 | orchestrator | 2026-02-23 20:31:49 | INFO  | Task 8a349e84-5fc9-4b12-85c7-26d7de3aa8ff is in state STARTED
2026-02-23 20:31:49.723733 | orchestrator | 2026-02-23 20:31:49 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:31:49.724411 | orchestrator | 2026-02-23 20:31:49 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:49.724447 | orchestrator | 2026-02-23 20:31:49 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:52.742365 | orchestrator | 2026-02-23 20:31:52 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:52.743077 | orchestrator | 2026-02-23 20:31:52 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:31:52.743888 | orchestrator | 2026-02-23 20:31:52 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:31:52.745033 | orchestrator | 2026-02-23 20:31:52 | INFO  | Task 8a349e84-5fc9-4b12-85c7-26d7de3aa8ff is in state STARTED
2026-02-23 20:31:52.745226 | orchestrator | 2026-02-23 20:31:52 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:31:52.746218 | orchestrator | 2026-02-23 20:31:52 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:52.746256 | orchestrator | 2026-02-23 20:31:52 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:55.775418 | orchestrator | 2026-02-23 20:31:55 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:55.777944 | orchestrator | 2026-02-23 20:31:55 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:31:55.780276 | orchestrator | 2026-02-23 20:31:55 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:31:55.782253 | orchestrator | 2026-02-23 20:31:55 | INFO  | Task 8a349e84-5fc9-4b12-85c7-26d7de3aa8ff is in state STARTED
2026-02-23 20:31:55.784472 | orchestrator | 2026-02-23 20:31:55 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:31:55.786236 | orchestrator | 2026-02-23 20:31:55 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:55.786434 | orchestrator | 2026-02-23 20:31:55 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:31:58.843119 | orchestrator | 2026-02-23 20:31:58 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:31:58.846617 | orchestrator | 2026-02-23 20:31:58 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:31:58.847140 | orchestrator | 2026-02-23 20:31:58 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:31:58.849059 | orchestrator | 2026-02-23 20:31:58 | INFO  | Task 8a349e84-5fc9-4b12-85c7-26d7de3aa8ff is in state STARTED
2026-02-23 20:31:58.850286 | orchestrator | 2026-02-23 20:31:58 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:31:58.852070 | orchestrator | 2026-02-23 20:31:58 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:31:58.852315 | orchestrator | 2026-02-23 20:31:58 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:32:01.921996 | orchestrator | 2026-02-23 20:32:01 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:32:01.922075 | orchestrator | 2026-02-23 20:32:01 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:32:01.933730 | orchestrator | 2026-02-23 20:32:01 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:32:01.933777 | orchestrator | 2026-02-23 20:32:01 | INFO  | Task 8a349e84-5fc9-4b12-85c7-26d7de3aa8ff is in state SUCCESS
2026-02-23 20:32:01.933783 | orchestrator | 2026-02-23 20:32:01 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:32:01.933801 | orchestrator | 2026-02-23 20:32:01 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:32:01.933806 | orchestrator | 2026-02-23 20:32:01 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:32:05.120588 | orchestrator | 2026-02-23 20:32:05 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:32:05.120647 | orchestrator | 2026-02-23 20:32:05 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:32:05.120653 | orchestrator | 2026-02-23 20:32:05 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:32:05.120657 | orchestrator | 2026-02-23 20:32:05 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:32:05.120661 | orchestrator | 2026-02-23 20:32:05 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:32:05.120664 | orchestrator | 2026-02-23 20:32:05 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:32:05.120668 | orchestrator | 2026-02-23 20:32:05 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:32:08.156535 | orchestrator | 2026-02-23 20:32:08 | INFO  |
Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:32:08.157175 | orchestrator | 2026-02-23 20:32:08 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:32:08.159052 | orchestrator | 2026-02-23 20:32:08 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:32:08.160929 | orchestrator | 2026-02-23 20:32:08 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:32:08.162053 | orchestrator | 2026-02-23 20:32:08 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:32:08.168251 | orchestrator | 2026-02-23 20:32:08 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:32:08.168330 | orchestrator | 2026-02-23 20:32:08 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:32:11.216049 | orchestrator | 2026-02-23 20:32:11 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:32:11.218083 | orchestrator | 2026-02-23 20:32:11 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:32:11.218884 | orchestrator | 2026-02-23 20:32:11 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:32:11.222198 | orchestrator | 2026-02-23 20:32:11 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:32:11.225712 | orchestrator | 2026-02-23 20:32:11 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:32:11.229314 | orchestrator | 2026-02-23 20:32:11 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:32:11.229353 | orchestrator | 2026-02-23 20:32:11 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:32:14.273512 | orchestrator | 2026-02-23 20:32:14 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:32:14.275921 | orchestrator | 2026-02-23 20:32:14 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:32:14.277396 | orchestrator | 2026-02-23 20:32:14 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:32:14.278148 | orchestrator | 2026-02-23 20:32:14 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:32:14.279880 | orchestrator | 2026-02-23 20:32:14 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:32:14.281851 | orchestrator | 2026-02-23 20:32:14 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:32:14.281897 | orchestrator | 2026-02-23 20:32:14 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:32:17.322051 | orchestrator | 2026-02-23 20:32:17 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:32:17.325131 | orchestrator | 2026-02-23 20:32:17 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:32:17.325235 | orchestrator | 2026-02-23 20:32:17 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:32:17.325919 | orchestrator | 2026-02-23 20:32:17 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state STARTED
2026-02-23 20:32:17.329045 | orchestrator | 2026-02-23 20:32:17 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED
2026-02-23 20:32:17.329713 | orchestrator | 2026-02-23 20:32:17 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:32:17.329753 | orchestrator | 2026-02-23 20:32:17 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:32:20.421724 | orchestrator | 2026-02-23 20:32:20 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:32:20.422443 | orchestrator | 2026-02-23 20:32:20 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:32:20.424007 | orchestrator | 2026-02-23 20:32:20 | INFO  | Task
c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:32:20.425922 | orchestrator | 2026-02-23 20:32:20 | INFO  | Task bd2c3e04-4ddb-4a3e-a647-a0903ffb9438 is in state SUCCESS
2026-02-23 20:32:20.427049 | orchestrator |
2026-02-23 20:32:20.427082 | orchestrator |
2026-02-23 20:32:20.427090 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:32:20.427098 | orchestrator |
2026-02-23 20:32:20.427105 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:32:20.427112 | orchestrator | Monday 23 February 2026 20:31:50 +0000 (0:00:00.603) 0:00:00.603 *******
2026-02-23 20:32:20.427119 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:32:20.427126 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:32:20.427133 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:32:20.427139 | orchestrator |
2026-02-23 20:32:20.427154 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-23 20:32:20.427161 | orchestrator | Monday 23 February 2026 20:31:50 +0000 (0:00:00.585) 0:00:01.188 *******
2026-02-23 20:32:20.427168 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-02-23 20:32:20.427175 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-02-23 20:32:20.427181 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-02-23 20:32:20.427187 | orchestrator |
2026-02-23 20:32:20.427194 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-02-23 20:32:20.427200 | orchestrator |
2026-02-23 20:32:20.427207 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-02-23 20:32:20.427213 | orchestrator | Monday 23 February 2026 20:31:51 +0000 (0:00:00.556) 0:00:01.744 *******
2026-02-23 20:32:20.427220 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:32:20.427227 | orchestrator |
2026-02-23 20:32:20.427233 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-02-23 20:32:20.427239 | orchestrator | Monday 23 February 2026 20:31:51 +0000 (0:00:00.576) 0:00:02.321 *******
2026-02-23 20:32:20.427247 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-23 20:32:20.427253 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-23 20:32:20.427260 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-23 20:32:20.427282 | orchestrator |
2026-02-23 20:32:20.427289 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-02-23 20:32:20.427295 | orchestrator | Monday 23 February 2026 20:31:52 +0000 (0:00:01.033) 0:00:03.354 *******
2026-02-23 20:32:20.427302 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-23 20:32:20.427307 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-23 20:32:20.427314 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-23 20:32:20.427320 | orchestrator |
2026-02-23 20:32:20.427326 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-02-23 20:32:20.427332 | orchestrator | Monday 23 February 2026 20:31:54 +0000 (0:00:01.900) 0:00:05.255 *******
2026-02-23 20:32:20.427339 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:32:20.427346 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:32:20.427351 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:32:20.427358 | orchestrator |
2026-02-23 20:32:20.427364 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-02-23 20:32:20.427370 | orchestrator | Monday 23 February 2026 20:31:56 +0000 (0:00:01.762) 0:00:07.017 *******
2026-02-23 20:32:20.427376 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:32:20.427382 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:32:20.427389 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:32:20.427394 | orchestrator |
2026-02-23 20:32:20.427401 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:32:20.427407 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:32:20.427479 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:32:20.427489 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:32:20.427495 | orchestrator |
2026-02-23 20:32:20.427501 | orchestrator |
2026-02-23 20:32:20.427507 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:32:20.427514 | orchestrator | Monday 23 February 2026 20:32:00 +0000 (0:00:03.687) 0:00:10.705 *******
2026-02-23 20:32:20.427529 | orchestrator | ===============================================================================
2026-02-23 20:32:20.427535 | orchestrator | memcached : Restart memcached container --------------------------------- 3.69s
2026-02-23 20:32:20.427543 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.90s
2026-02-23 20:32:20.427546 | orchestrator | memcached : Check memcached container ----------------------------------- 1.76s
2026-02-23 20:32:20.427550 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.03s
2026-02-23 20:32:20.427554 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.59s
2026-02-23 20:32:20.427558 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.58s
2026-02-23 20:32:20.427561 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s
2026-02-23 20:32:20.427565 | orchestrator |
2026-02-23 20:32:20.427569 | orchestrator |
2026-02-23 20:32:20.427573 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:32:20.427576 | orchestrator |
2026-02-23 20:32:20.427580 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:32:20.427584 | orchestrator | Monday 23 February 2026 20:31:50 +0000 (0:00:00.260) 0:00:00.260 *******
2026-02-23 20:32:20.427588 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:32:20.427592 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:32:20.427595 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:32:20.427599 | orchestrator |
2026-02-23 20:32:20.427603 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-23 20:32:20.427636 | orchestrator | Monday 23 February 2026 20:31:50 +0000 (0:00:00.413) 0:00:00.673 *******
2026-02-23 20:32:20.427647 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-23 20:32:20.427651 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-23 20:32:20.427655 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-23 20:32:20.427659 | orchestrator |
2026-02-23 20:32:20.427663 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-23 20:32:20.427667 | orchestrator |
2026-02-23 20:32:20.427671 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-23 20:32:20.427676 | orchestrator | Monday 23 February 2026 20:31:51 +0000 (0:00:00.607) 0:00:01.281 *******
2026-02-23 20:32:20.427682 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:32:20.427689 | orchestrator |
2026-02-23 20:32:20.427695 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-23 20:32:20.427702 | orchestrator | Monday 23 February 2026 20:31:51 +0000 (0:00:00.425) 0:00:01.706 *******
2026-02-23 20:32:20.427710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427761 | orchestrator |
2026-02-23 20:32:20.427767 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-23 20:32:20.427773 | orchestrator | Monday 23 February 2026 20:31:53 +0000 (0:00:01.583) 0:00:03.289 *******
2026-02-23 20:32:20.427779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427830 | orchestrator |
2026-02-23 20:32:20.427836 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-23 20:32:20.427842 | orchestrator | Monday 23 February 2026 20:31:55 +0000 (0:00:02.371) 0:00:05.661 *******
2026-02-23 20:32:20.427849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427901 | orchestrator |
2026-02-23 20:32:20.427910 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-02-23 20:32:20.427917 | orchestrator | Monday 23 February 2026 20:31:58 +0000 (0:00:02.905) 0:00:08.567 *******
2026-02-23 20:32:20.427923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-23 20:32:20.427930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-23 20:32:20.427937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-23 20:32:20.427944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-23 20:32:20.427953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-23 20:32:20.427964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-23 20:32:20.427970 | orchestrator | 2026-02-23 20:32:20.427974 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-23 20:32:20.427978 | orchestrator | Monday 23 February 2026 20:32:01 +0000 (0:00:02.537) 0:00:11.105 ******* 2026-02-23 20:32:20.427981 | orchestrator | 2026-02-23 20:32:20.427985 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-23 20:32:20.427992 | orchestrator | Monday 23 February 2026 20:32:01 +0000 (0:00:00.148) 0:00:11.253 ******* 2026-02-23 20:32:20.427996 | orchestrator | 2026-02-23 20:32:20.428000 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-23 20:32:20.428005 | orchestrator | Monday 23 February 2026 20:32:01 +0000 (0:00:00.073) 0:00:11.326 ******* 2026-02-23 20:32:20.428012 | orchestrator | 2026-02-23 20:32:20.428018 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-23 20:32:20.428024 | orchestrator | Monday 23 February 2026 20:32:01 +0000 (0:00:00.091) 0:00:11.418 ******* 
2026-02-23 20:32:20.428030 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:32:20.428037 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:32:20.428044 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:32:20.428050 | orchestrator | 2026-02-23 20:32:20.428056 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-23 20:32:20.428062 | orchestrator | Monday 23 February 2026 20:32:10 +0000 (0:00:09.271) 0:00:20.689 ******* 2026-02-23 20:32:20.428068 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:32:20.428147 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:32:20.428156 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:32:20.428162 | orchestrator | 2026-02-23 20:32:20.428168 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:32:20.428175 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:32:20.428181 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:32:20.428188 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:32:20.428194 | orchestrator | 2026-02-23 20:32:20.428200 | orchestrator | 2026-02-23 20:32:20.428206 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:32:20.428212 | orchestrator | Monday 23 February 2026 20:32:19 +0000 (0:00:08.207) 0:00:28.897 ******* 2026-02-23 20:32:20.428219 | orchestrator | =============================================================================== 2026-02-23 20:32:20.428225 | orchestrator | redis : Restart redis container ----------------------------------------- 9.27s 2026-02-23 20:32:20.428232 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.21s 2026-02-23 20:32:20.428238 | 
orchestrator | redis : Copying over redis config files --------------------------------- 2.91s 2026-02-23 20:32:20.428252 | orchestrator | redis : Check redis containers ------------------------------------------ 2.54s 2026-02-23 20:32:20.428259 | orchestrator | redis : Copying over default config.json files -------------------------- 2.37s 2026-02-23 20:32:20.428266 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.58s 2026-02-23 20:32:20.428272 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2026-02-23 20:32:20.428279 | orchestrator | redis : include_tasks --------------------------------------------------- 0.43s 2026-02-23 20:32:20.428285 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2026-02-23 20:32:20.428291 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.31s 2026-02-23 20:32:20.428298 | orchestrator | 2026-02-23 20:32:20 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:20.428305 | orchestrator | 2026-02-23 20:32:20 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:20.428315 | orchestrator | 2026-02-23 20:32:20 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:23.849727 | orchestrator | 2026-02-23 20:32:23 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:23.849797 | orchestrator | 2026-02-23 20:32:23 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:23.850276 | orchestrator | 2026-02-23 20:32:23 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:23.851310 | orchestrator | 2026-02-23 20:32:23 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:23.852256 | orchestrator | 2026-02-23 20:32:23 | INFO  | Task 
19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:23.852295 | orchestrator | 2026-02-23 20:32:23 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:26.924125 | orchestrator | 2026-02-23 20:32:26 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:26.924336 | orchestrator | 2026-02-23 20:32:26 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:26.926191 | orchestrator | 2026-02-23 20:32:26 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:26.926454 | orchestrator | 2026-02-23 20:32:26 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:26.927168 | orchestrator | 2026-02-23 20:32:26 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:26.927198 | orchestrator | 2026-02-23 20:32:26 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:30.027336 | orchestrator | 2026-02-23 20:32:30 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:30.031065 | orchestrator | 2026-02-23 20:32:30 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:30.031450 | orchestrator | 2026-02-23 20:32:30 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:30.032285 | orchestrator | 2026-02-23 20:32:30 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:30.033001 | orchestrator | 2026-02-23 20:32:30 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:30.033041 | orchestrator | 2026-02-23 20:32:30 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:33.158086 | orchestrator | 2026-02-23 20:32:33 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:33.158155 | orchestrator | 2026-02-23 20:32:33 | INFO  | Task 
d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:33.158701 | orchestrator | 2026-02-23 20:32:33 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:33.159323 | orchestrator | 2026-02-23 20:32:33 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:33.162669 | orchestrator | 2026-02-23 20:32:33 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:33.162728 | orchestrator | 2026-02-23 20:32:33 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:36.216839 | orchestrator | 2026-02-23 20:32:36 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:36.217952 | orchestrator | 2026-02-23 20:32:36 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:36.218002 | orchestrator | 2026-02-23 20:32:36 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:36.219325 | orchestrator | 2026-02-23 20:32:36 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:36.220224 | orchestrator | 2026-02-23 20:32:36 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:36.220279 | orchestrator | 2026-02-23 20:32:36 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:39.266830 | orchestrator | 2026-02-23 20:32:39 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:39.267591 | orchestrator | 2026-02-23 20:32:39 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:39.268742 | orchestrator | 2026-02-23 20:32:39 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:39.270150 | orchestrator | 2026-02-23 20:32:39 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:39.271569 | orchestrator | 2026-02-23 20:32:39 | INFO  | Task 
19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:39.273121 | orchestrator | 2026-02-23 20:32:39 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:42.296835 | orchestrator | 2026-02-23 20:32:42 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:42.297163 | orchestrator | 2026-02-23 20:32:42 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:42.298004 | orchestrator | 2026-02-23 20:32:42 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:42.298710 | orchestrator | 2026-02-23 20:32:42 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:42.299718 | orchestrator | 2026-02-23 20:32:42 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:42.299744 | orchestrator | 2026-02-23 20:32:42 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:45.324183 | orchestrator | 2026-02-23 20:32:45 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:45.325279 | orchestrator | 2026-02-23 20:32:45 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:45.326551 | orchestrator | 2026-02-23 20:32:45 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:45.327892 | orchestrator | 2026-02-23 20:32:45 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:45.329212 | orchestrator | 2026-02-23 20:32:45 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:45.329233 | orchestrator | 2026-02-23 20:32:45 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:48.363026 | orchestrator | 2026-02-23 20:32:48 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:48.363919 | orchestrator | 2026-02-23 20:32:48 | INFO  | Task 
d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:48.367216 | orchestrator | 2026-02-23 20:32:48 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:48.369882 | orchestrator | 2026-02-23 20:32:48 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:48.370994 | orchestrator | 2026-02-23 20:32:48 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:48.371031 | orchestrator | 2026-02-23 20:32:48 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:51.401970 | orchestrator | 2026-02-23 20:32:51 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:51.402550 | orchestrator | 2026-02-23 20:32:51 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:51.403908 | orchestrator | 2026-02-23 20:32:51 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:51.405228 | orchestrator | 2026-02-23 20:32:51 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:51.406854 | orchestrator | 2026-02-23 20:32:51 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:51.406888 | orchestrator | 2026-02-23 20:32:51 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:54.439354 | orchestrator | 2026-02-23 20:32:54 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:54.440836 | orchestrator | 2026-02-23 20:32:54 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:54.441513 | orchestrator | 2026-02-23 20:32:54 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:54.444402 | orchestrator | 2026-02-23 20:32:54 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:54.445881 | orchestrator | 2026-02-23 20:32:54 | INFO  | Task 
19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:54.446185 | orchestrator | 2026-02-23 20:32:54 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:32:57.482089 | orchestrator | 2026-02-23 20:32:57 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:32:57.482231 | orchestrator | 2026-02-23 20:32:57 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:32:57.483011 | orchestrator | 2026-02-23 20:32:57 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:32:57.484978 | orchestrator | 2026-02-23 20:32:57 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state STARTED 2026-02-23 20:32:57.486098 | orchestrator | 2026-02-23 20:32:57 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:32:57.486131 | orchestrator | 2026-02-23 20:32:57 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:33:00.519736 | orchestrator | 2026-02-23 20:33:00 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:33:00.519805 | orchestrator | 2026-02-23 20:33:00 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED 2026-02-23 20:33:00.521031 | orchestrator | 2026-02-23 20:33:00 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:33:00.522239 | orchestrator | 2026-02-23 20:33:00 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:33:00.524136 | orchestrator | 2026-02-23 20:33:00 | INFO  | Task 6122232b-39df-42b6-b3b6-77c6682e789b is in state SUCCESS 2026-02-23 20:33:00.526175 | orchestrator | 2026-02-23 20:33:00.526223 | orchestrator | 2026-02-23 20:33:00.526243 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:33:00.526250 | orchestrator | 2026-02-23 20:33:00.526255 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-02-23 20:33:00.526260 | orchestrator | Monday 23 February 2026 20:31:50 +0000 (0:00:00.347) 0:00:00.347 ******* 2026-02-23 20:33:00.526273 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:00.526278 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:00.526283 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:00.526288 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:33:00.526293 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:33:00.526298 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:33:00.526304 | orchestrator | 2026-02-23 20:33:00.526309 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:33:00.526315 | orchestrator | Monday 23 February 2026 20:31:50 +0000 (0:00:00.677) 0:00:01.025 ******* 2026-02-23 20:33:00.526320 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-23 20:33:00.526356 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-23 20:33:00.526359 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-23 20:33:00.526363 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-23 20:33:00.526366 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-23 20:33:00.526369 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-23 20:33:00.526372 | orchestrator | 2026-02-23 20:33:00.526375 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-23 20:33:00.526378 | orchestrator | 2026-02-23 20:33:00.526381 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-23 20:33:00.526384 | orchestrator | Monday 23 February 2026 20:31:51 +0000 (0:00:00.640) 
0:00:01.665 ******* 2026-02-23 20:33:00.526388 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:33:00.526392 | orchestrator | 2026-02-23 20:33:00.526395 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-23 20:33:00.526398 | orchestrator | Monday 23 February 2026 20:31:52 +0000 (0:00:01.196) 0:00:02.861 ******* 2026-02-23 20:33:00.526401 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-23 20:33:00.526404 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-23 20:33:00.526408 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-23 20:33:00.526411 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-23 20:33:00.526414 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-23 20:33:00.526417 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-23 20:33:00.526420 | orchestrator | 2026-02-23 20:33:00.526423 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-23 20:33:00.526426 | orchestrator | Monday 23 February 2026 20:31:54 +0000 (0:00:01.394) 0:00:04.256 ******* 2026-02-23 20:33:00.526430 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-23 20:33:00.526433 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-23 20:33:00.526436 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-23 20:33:00.526439 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-23 20:33:00.526442 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-23 20:33:00.526445 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-23 20:33:00.526448 | orchestrator | 2026-02-23 20:33:00.526451 | orchestrator | TASK [module-load 
: Drop module persistence] *********************************** 2026-02-23 20:33:00.526491 | orchestrator | Monday 23 February 2026 20:31:55 +0000 (0:00:01.336) 0:00:05.593 ******* 2026-02-23 20:33:00.526495 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-23 20:33:00.526498 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:00.526502 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-23 20:33:00.526505 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:33:00.526508 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-23 20:33:00.526511 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:33:00.526514 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-23 20:33:00.526517 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:33:00.526520 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-23 20:33:00.526524 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:33:00.526527 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-23 20:33:00.526530 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:33:00.526533 | orchestrator | 2026-02-23 20:33:00.526536 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-23 20:33:00.526539 | orchestrator | Monday 23 February 2026 20:31:57 +0000 (0:00:01.717) 0:00:07.311 ******* 2026-02-23 20:33:00.526542 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:00.526545 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:33:00.526548 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:33:00.526552 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:33:00.526555 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:33:00.526561 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:33:00.526564 | orchestrator | 2026-02-23 20:33:00.526567 | orchestrator | TASK [openvswitch : Ensuring 
config directories exist] ************************* 2026-02-23 20:33:00.526570 | orchestrator | Monday 23 February 2026 20:31:58 +0000 (0:00:01.033) 0:00:08.344 ******* 2026-02-23 20:33:00.526583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526603 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526608 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526614 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526667 | orchestrator | 2026-02-23 20:33:00.526673 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-23 20:33:00.526678 | orchestrator | Monday 23 February 2026 20:32:00 +0000 (0:00:02.700) 0:00:11.045 ******* 2026-02-23 20:33:00.526684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526740 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526743 | orchestrator | 2026-02-23 20:33:00.526747 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-23 20:33:00.526750 | orchestrator | Monday 23 February 2026 20:32:05 +0000 (0:00:05.001) 0:00:16.047 ******* 2026-02-23 20:33:00.526753 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:00.526756 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:33:00.526759 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:33:00.526762 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:33:00.526766 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:33:00.526769 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:33:00.526772 | orchestrator | 2026-02-23 20:33:00.526775 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-23 20:33:00.526781 | orchestrator | Monday 23 February 2026 20:32:07 +0000 (0:00:01.895) 0:00:17.942 ******* 2026-02-23 20:33:00.526785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526789 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526820 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526829 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-23 20:33:00.526846 | orchestrator | 2026-02-23 20:33:00.526849 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-23 20:33:00.526853 | orchestrator | Monday 23 February 2026 20:32:11 +0000 (0:00:03.372) 0:00:21.314 ******* 2026-02-23 20:33:00.526857 | orchestrator | 2026-02-23 20:33:00.526860 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-23 20:33:00.526864 | orchestrator | Monday 23 February 2026 20:32:11 +0000 (0:00:00.617) 0:00:21.932 ******* 2026-02-23 20:33:00.526867 | orchestrator | 2026-02-23 20:33:00.526871 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-23 20:33:00.526874 | orchestrator | Monday 23 February 2026 20:32:12 +0000 (0:00:00.191) 0:00:22.123 ******* 2026-02-23 20:33:00.526878 | orchestrator | 2026-02-23 20:33:00.526881 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-23 20:33:00.526885 | orchestrator | Monday 23 February 2026 20:32:12 +0000 (0:00:00.122) 0:00:22.245 ******* 2026-02-23 20:33:00.526888 | orchestrator | 2026-02-23 20:33:00.526892 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-23 20:33:00.526895 | orchestrator | Monday 23 February 2026 20:32:12 +0000 (0:00:00.128) 0:00:22.374 ******* 2026-02-23 20:33:00.526899 | orchestrator | 2026-02-23 20:33:00.526902 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-23 20:33:00.526906 | orchestrator | Monday 23 February 2026 20:32:12 +0000 (0:00:00.119) 0:00:22.493 ******* 2026-02-23 20:33:00.526910 | orchestrator | 2026-02-23 20:33:00.526913 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-23 
20:33:00.526917 | orchestrator | Monday 23 February 2026 20:32:12 +0000 (0:00:00.120) 0:00:22.614 ******* 2026-02-23 20:33:00.526920 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:00.526924 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:00.526927 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:00.526931 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:33:00.526935 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:33:00.526938 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:33:00.526941 | orchestrator | 2026-02-23 20:33:00.526945 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-23 20:33:00.526949 | orchestrator | Monday 23 February 2026 20:32:22 +0000 (0:00:10.373) 0:00:32.987 ******* 2026-02-23 20:33:00.526952 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:00.526956 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:00.526960 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:00.526963 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:33:00.526967 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:33:00.526970 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:33:00.526974 | orchestrator | 2026-02-23 20:33:00.526977 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-23 20:33:00.526981 | orchestrator | Monday 23 February 2026 20:32:25 +0000 (0:00:02.354) 0:00:35.342 ******* 2026-02-23 20:33:00.526984 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:00.526988 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:33:00.526991 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:33:00.526995 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:00.526999 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:33:00.527004 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:00.527007 | orchestrator | 2026-02-23 20:33:00.527011 | orchestrator | 
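The service dicts echoed in the items above each carry a `healthcheck` mapping (`interval`, `retries`, `start_period`, `test`, `timeout`). As a minimal sketch of what such a dict corresponds to in container-engine terms, the helper below translates one into the equivalent `docker run` health flags; the function name `healthcheck_to_flags` is illustrative and not part of kolla-ansible, and plain seconds are assumed for the numeric fields.

```python
# Sketch (assumption): map a kolla-style healthcheck dict, as seen in the
# log items above, onto the equivalent `docker run` health flags.
# `healthcheck_to_flags` is a hypothetical helper, not kolla-ansible code.

def healthcheck_to_flags(hc: dict) -> list[str]:
    """Build docker CLI health flags from one healthcheck dict (seconds assumed)."""
    flags = [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    test = hc["test"]
    if test[0] == "CMD-SHELL":  # shell-form test command, as in the log
        flags.append(f"--health-cmd={test[1]}")
    return flags

# Healthcheck dict taken verbatim from the openvswitch-db-server items above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "ovsdb-client list-dbs"], "timeout": "30"}
print(healthcheck_to_flags(hc))
```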
TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-23 20:33:00.527015 | orchestrator | Monday 23 February 2026 20:32:34 +0000 (0:00:09.648) 0:00:44.991 ******* 2026-02-23 20:33:00.527018 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-23 20:33:00.527022 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-23 20:33:00.527028 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-23 20:33:00.527032 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-23 20:33:00.527036 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-23 20:33:00.527085 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-23 20:33:00.527092 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-23 20:33:00.527099 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-23 20:33:00.527107 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-23 20:33:00.527112 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-23 20:33:00.527117 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-23 20:33:00.527122 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 
'testbed-node-5'}) 2026-02-23 20:33:00.527128 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-23 20:33:00.527133 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-23 20:33:00.527139 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-23 20:33:00.527144 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-23 20:33:00.527149 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-23 20:33:00.527154 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-23 20:33:00.527159 | orchestrator | 2026-02-23 20:33:00.527163 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-23 20:33:00.527166 | orchestrator | Monday 23 February 2026 20:32:43 +0000 (0:00:08.982) 0:00:53.974 ******* 2026-02-23 20:33:00.527169 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-23 20:33:00.527172 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:33:00.527175 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-23 20:33:00.527179 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:33:00.527182 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-23 20:33:00.527185 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:33:00.527188 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-23 20:33:00.527191 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-23 20:33:00.527195 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-23 20:33:00.527198 | 
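The `Set system-id, hostname and hw-offload` items above are `{'col', 'name', 'value'}` dicts, with `state: absent` used for `other_config:hw-offload`. One plausible mapping of such an item onto an `ovs-vsctl` invocation against the `Open_vSwitch` table is sketched below; `item_to_argv` is an illustrative helper, not the actual module logic used by the role.

```python
# Sketch (assumption): translate one {'col', 'name', 'value'} item from the
# log above into an ovs-vsctl argv. `item_to_argv` is hypothetical, not
# kolla-ansible code; the real role drives an Ansible module instead.

def item_to_argv(item: dict) -> list[str]:
    """Build an ovs-vsctl command for a single column/key item."""
    if item.get("state") == "absent":
        # e.g. other_config:hw-offload is removed rather than set
        return ["ovs-vsctl", "remove", "Open_vSwitch", ".",
                item["col"], item["name"]]
    return ["ovs-vsctl", "set", "Open_vSwitch", ".",
            f"{item['col']}:{item['name']}={item['value']}"]

print(item_to_argv({"col": "external_ids", "name": "system-id",
                    "value": "testbed-node-1"}))
```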
orchestrator | 2026-02-23 20:33:00.527201 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-23 20:33:00.527204 | orchestrator | Monday 23 February 2026 20:32:46 +0000 (0:00:02.632) 0:00:56.607 ******* 2026-02-23 20:33:00.527211 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-23 20:33:00.527214 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:33:00.527217 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-23 20:33:00.527221 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:33:00.527224 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-23 20:33:00.527227 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:33:00.527230 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-23 20:33:00.527233 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-23 20:33:00.527236 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-23 20:33:00.527239 | orchestrator | 2026-02-23 20:33:00.527242 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-23 20:33:00.527246 | orchestrator | Monday 23 February 2026 20:32:49 +0000 (0:00:03.446) 0:01:00.053 ******* 2026-02-23 20:33:00.527249 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:00.527252 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:00.527255 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:33:00.527258 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:33:00.527261 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:33:00.527264 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:00.527267 | orchestrator | 2026-02-23 20:33:00.527271 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:33:00.527274 | orchestrator | 
testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-23 20:33:00.527277 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-23 20:33:00.527283 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-23 20:33:00.527286 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-23 20:33:00.527289 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-23 20:33:00.527296 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-23 20:33:00.527303 | orchestrator |
2026-02-23 20:33:00.527309 | orchestrator |
2026-02-23 20:33:00.527314 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:33:00.527319 | orchestrator | Monday 23 February 2026 20:32:58 +0000 (0:00:08.437) 0:01:08.491 *******
2026-02-23 20:33:00.527324 | orchestrator | ===============================================================================
2026-02-23 20:33:00.527329 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.09s
2026-02-23 20:33:00.527334 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.37s
2026-02-23 20:33:00.527339 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.98s
2026-02-23 20:33:00.527343 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.00s
2026-02-23 20:33:00.527348 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.45s
2026-02-23 20:33:00.527354 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.38s
2026-02-23 20:33:00.527359 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.70s
2026-02-23 20:33:00.527364 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.63s
2026-02-23 20:33:00.527369 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.36s
2026-02-23 20:33:00.527382 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.90s
2026-02-23 20:33:00.527389 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.72s
2026-02-23 20:33:00.527393 | orchestrator | module-load : Load modules ---------------------------------------------- 1.39s
2026-02-23 20:33:00.527398 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.34s
2026-02-23 20:33:00.527403 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.30s
2026-02-23 20:33:00.527408 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.20s
2026-02-23 20:33:00.527413 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.03s
2026-02-23 20:33:00.527418 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.68s
2026-02-23 20:33:00.527422 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s
2026-02-23 20:33:00.529279 | orchestrator | 2026-02-23 20:33:00 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:00.529576 | orchestrator | 2026-02-23 20:33:00 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:03.565355 | orchestrator | 2026-02-23 20:33:03 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:03.565805 | orchestrator | 2026-02-23 20:33:03 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:03.567554 |
orchestrator | 2026-02-23 20:33:03 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:03.568319 | orchestrator | 2026-02-23 20:33:03 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:03.569091 | orchestrator | 2026-02-23 20:33:03 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:03.569119 | orchestrator | 2026-02-23 20:33:03 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:06.597221 | orchestrator | 2026-02-23 20:33:06 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:06.597997 | orchestrator | 2026-02-23 20:33:06 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:06.598868 | orchestrator | 2026-02-23 20:33:06 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:06.600216 | orchestrator | 2026-02-23 20:33:06 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:06.601236 | orchestrator | 2026-02-23 20:33:06 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:06.601284 | orchestrator | 2026-02-23 20:33:06 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:09.632431 | orchestrator | 2026-02-23 20:33:09 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:09.632645 | orchestrator | 2026-02-23 20:33:09 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:09.633528 | orchestrator | 2026-02-23 20:33:09 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:09.634324 | orchestrator | 2026-02-23 20:33:09 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:09.634909 | orchestrator | 2026-02-23 20:33:09 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:09.634945 | orchestrator | 2026-02-23 20:33:09 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:12.712384 | orchestrator | 2026-02-23 20:33:12 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:12.712576 | orchestrator | 2026-02-23 20:33:12 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:12.713497 | orchestrator | 2026-02-23 20:33:12 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:12.714154 | orchestrator | 2026-02-23 20:33:12 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:12.714906 | orchestrator | 2026-02-23 20:33:12 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:12.714938 | orchestrator | 2026-02-23 20:33:12 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:15.749267 | orchestrator | 2026-02-23 20:33:15 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:15.749856 | orchestrator | 2026-02-23 20:33:15 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:15.750604 | orchestrator | 2026-02-23 20:33:15 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:15.751727 | orchestrator | 2026-02-23 20:33:15 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:15.753960 | orchestrator | 2026-02-23 20:33:15 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:15.754420 | orchestrator | 2026-02-23 20:33:15 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:18.791166 | orchestrator | 2026-02-23 20:33:18 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:18.791214 | orchestrator | 2026-02-23 20:33:18 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:18.792043 | orchestrator | 2026-02-23 20:33:18 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:18.792505 | orchestrator | 2026-02-23 20:33:18 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:18.793413 | orchestrator | 2026-02-23 20:33:18 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:18.793446 | orchestrator | 2026-02-23 20:33:18 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:21.823968 | orchestrator | 2026-02-23 20:33:21 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:21.825792 | orchestrator | 2026-02-23 20:33:21 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:21.827555 | orchestrator | 2026-02-23 20:33:21 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:21.829462 | orchestrator | 2026-02-23 20:33:21 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:21.831098 | orchestrator | 2026-02-23 20:33:21 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:21.831189 | orchestrator | 2026-02-23 20:33:21 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:24.887358 | orchestrator | 2026-02-23 20:33:24 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:24.887763 | orchestrator | 2026-02-23 20:33:24 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:24.888728 | orchestrator | 2026-02-23 20:33:24 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:24.889435 | orchestrator | 2026-02-23 20:33:24 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:24.890232 | orchestrator | 2026-02-23 20:33:24 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:24.890256 | orchestrator | 2026-02-23 20:33:24 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:27.918714 | orchestrator | 2026-02-23 20:33:27 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:27.921225 | orchestrator | 2026-02-23 20:33:27 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:27.921908 | orchestrator | 2026-02-23 20:33:27 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:27.923474 | orchestrator | 2026-02-23 20:33:27 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:27.926164 | orchestrator | 2026-02-23 20:33:27 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:27.926203 | orchestrator | 2026-02-23 20:33:27 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:30.953125 | orchestrator | 2026-02-23 20:33:30 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:30.954115 | orchestrator | 2026-02-23 20:33:30 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:30.955711 | orchestrator | 2026-02-23 20:33:30 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:30.957038 | orchestrator | 2026-02-23 20:33:30 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:30.958591 | orchestrator | 2026-02-23 20:33:30 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:30.958654 | orchestrator | 2026-02-23 20:33:30 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:33.990261 | orchestrator | 2026-02-23 20:33:33 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:33.991881 | orchestrator | 2026-02-23 20:33:33 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:33.993466 | orchestrator | 2026-02-23 20:33:33 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:33.995051 | orchestrator | 2026-02-23 20:33:33 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:33.996206 | orchestrator | 2026-02-23 20:33:33 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:33.996405 | orchestrator | 2026-02-23 20:33:33 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:37.045395 | orchestrator | 2026-02-23 20:33:37 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:37.045480 | orchestrator | 2026-02-23 20:33:37 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:37.050185 | orchestrator | 2026-02-23 20:33:37 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:37.052009 | orchestrator | 2026-02-23 20:33:37 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:37.052889 | orchestrator | 2026-02-23 20:33:37 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:37.052936 | orchestrator | 2026-02-23 20:33:37 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:40.206717 | orchestrator | 2026-02-23 20:33:40 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:40.207510 | orchestrator | 2026-02-23 20:33:40 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:40.208189 | orchestrator | 2026-02-23 20:33:40 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:40.209297 | orchestrator | 2026-02-23 20:33:40 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:40.210785 | orchestrator | 2026-02-23 20:33:40 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:40.210811 | orchestrator | 2026-02-23 20:33:40 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:43.382145 | orchestrator | 2026-02-23 20:33:43 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:43.382583 | orchestrator | 2026-02-23 20:33:43 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:43.383411 | orchestrator | 2026-02-23 20:33:43 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:43.384220 | orchestrator | 2026-02-23 20:33:43 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:43.384629 | orchestrator | 2026-02-23 20:33:43 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:43.384695 | orchestrator | 2026-02-23 20:33:43 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:46.410764 | orchestrator | 2026-02-23 20:33:46 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:46.411709 | orchestrator | 2026-02-23 20:33:46 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:46.412303 | orchestrator | 2026-02-23 20:33:46 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:46.413179 | orchestrator | 2026-02-23 20:33:46 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:46.413847 | orchestrator | 2026-02-23 20:33:46 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:46.414004 | orchestrator | 2026-02-23 20:33:46 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:49.473873 | orchestrator | 2026-02-23 20:33:49 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:49.474383 | orchestrator | 2026-02-23 20:33:49 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state STARTED
2026-02-23 20:33:49.475494 | orchestrator | 2026-02-23 20:33:49 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:49.475701 | orchestrator | 2026-02-23 20:33:49 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:49.476799 | orchestrator | 2026-02-23 20:33:49 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:49.476833 | orchestrator | 2026-02-23 20:33:49 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:52.509793 | orchestrator | 2026-02-23 20:33:52 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED
2026-02-23 20:33:52.511133 | orchestrator | 2026-02-23 20:33:52 | INFO  | Task d6c16c83-1a54-44ce-8801-452748000c77 is in state SUCCESS
2026-02-23 20:33:52.512030 | orchestrator |
2026-02-23 20:33:52.513058 | orchestrator |
2026-02-23 20:33:52.513094 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-23 20:33:52.513102 | orchestrator |
2026-02-23 20:33:52.513107 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-23 20:33:52.513113 | orchestrator | Monday 23 February 2026 20:29:28 +0000 (0:00:00.169) 0:00:00.169 *******
2026-02-23 20:33:52.513119 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:33:52.513125 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:33:52.513130 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:33:52.513137 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:33:52.513140 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:33:52.513143 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:33:52.513147 | orchestrator |
2026-02-23 20:33:52.513164 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-23 20:33:52.513170 | orchestrator | Monday 23 February 2026 20:29:29 +0000 (0:00:00.637) 0:00:00.807 *******
2026-02-23 20:33:52.513175 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513181 |
orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513186 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513191 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513197 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513202 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513207 | orchestrator |
2026-02-23 20:33:52.513212 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-23 20:33:52.513217 | orchestrator | Monday 23 February 2026 20:29:29 +0000 (0:00:00.598) 0:00:01.405 *******
2026-02-23 20:33:52.513220 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513223 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513226 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513229 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513232 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513235 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513238 | orchestrator |
2026-02-23 20:33:52.513241 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-23 20:33:52.513244 | orchestrator | Monday 23 February 2026 20:29:30 +0000 (0:00:00.699) 0:00:02.105 *******
2026-02-23 20:33:52.513247 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:33:52.513250 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:33:52.513267 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:33:52.513270 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:33:52.513273 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:33:52.513276 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:33:52.513279 | orchestrator |
2026-02-23 20:33:52.513282 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-23 20:33:52.513285 | orchestrator | Monday 23 February 2026 20:29:32 +0000 (0:00:02.021) 0:00:04.127 *******
2026-02-23 20:33:52.513288 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:33:52.513293 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:33:52.513298 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:33:52.513303 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:33:52.513308 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:33:52.513313 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:33:52.513325 | orchestrator |
2026-02-23 20:33:52.513331 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-23 20:33:52.513336 | orchestrator | Monday 23 February 2026 20:29:33 +0000 (0:00:01.034) 0:00:05.162 *******
2026-02-23 20:33:52.513341 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:33:52.513346 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:33:52.513351 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:33:52.513355 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:33:52.513360 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:33:52.513365 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:33:52.513370 | orchestrator |
2026-02-23 20:33:52.513375 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-23 20:33:52.513381 | orchestrator | Monday 23 February 2026 20:29:34 +0000 (0:00:00.906) 0:00:06.069 *******
2026-02-23 20:33:52.513394 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513400 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513405 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513410 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513415 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513420 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513425 | orchestrator |
2026-02-23 20:33:52.513430 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-23 20:33:52.513434 | orchestrator | Monday 23 February 2026 20:29:35 +0000 (0:00:00.686) 0:00:06.755 *******
2026-02-23 20:33:52.513442 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513445 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513448 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513452 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513455 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513458 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513461 | orchestrator |
2026-02-23 20:33:52.513464 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-23 20:33:52.513467 | orchestrator | Monday 23 February 2026 20:29:35 +0000 (0:00:00.534) 0:00:07.289 *******
2026-02-23 20:33:52.513472 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-23 20:33:52.513477 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-23 20:33:52.513481 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513487 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-23 20:33:52.513490 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-23 20:33:52.513493 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513497 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-23 20:33:52.513500 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-23 20:33:52.513503 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513506 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-23 20:33:52.513518 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-23 20:33:52.513521 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513524 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-23 20:33:52.513527 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-23 20:33:52.513530 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513533 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-23 20:33:52.513536 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-23 20:33:52.513539 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513542 | orchestrator |
2026-02-23 20:33:52.513545 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-23 20:33:52.513548 | orchestrator | Monday 23 February 2026 20:29:36 +0000 (0:00:00.656) 0:00:07.945 *******
2026-02-23 20:33:52.513551 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513554 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513557 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513560 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513564 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513567 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513570 | orchestrator |
2026-02-23 20:33:52.513573 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-23 20:33:52.513576 | orchestrator | Monday 23 February 2026 20:29:38 +0000 (0:00:01.549) 0:00:09.495 *******
2026-02-23 20:33:52.513580 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:33:52.513583 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:33:52.513586 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:33:52.513589 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:33:52.513592 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:33:52.513595 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:33:52.513598 | orchestrator |
2026-02-23 20:33:52.513601 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-23 20:33:52.513604 | orchestrator | Monday 23 February 2026 20:29:39 +0000 (0:00:01.226) 0:00:10.721 *******
2026-02-23 20:33:52.513607 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:33:52.513610 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:33:52.513629 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:33:52.513632 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:33:52.513635 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:33:52.513639 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:33:52.513643 | orchestrator |
2026-02-23 20:33:52.513647 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-23 20:33:52.513653 | orchestrator | Monday 23 February 2026 20:29:44 +0000 (0:00:05.336) 0:00:16.057 *******
2026-02-23 20:33:52.513656 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513660 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513663 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513668 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513673 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513678 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513683 | orchestrator |
2026-02-23 20:33:52.513688 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-23 20:33:52.513693 | orchestrator | Monday 23 February 2026 20:29:45 +0000 (0:00:01.390) 0:00:17.448 *******
2026-02-23 20:33:52.513698 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513703 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513708 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513714 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513718 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513721 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513724 | orchestrator |
2026-02-23 20:33:52.513728 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-23 20:33:52.513735 | orchestrator | Monday 23 February 2026 20:29:47 +0000 (0:00:01.689) 0:00:19.138 *******
2026-02-23 20:33:52.513738 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513742 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513745 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513749 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513752 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513756 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513759 | orchestrator |
2026-02-23 20:33:52.513763 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-23 20:33:52.513766 | orchestrator | Monday 23 February 2026 20:29:49 +0000 (0:00:01.783) 0:00:20.921 *******
2026-02-23 20:33:52.513770 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-23 20:33:52.513774 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-23 20:33:52.513778 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-23 20:33:52.513781 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-23 20:33:52.513785 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513788 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-23 20:33:52.513791 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-23 20:33:52.513795 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513799 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-23 20:33:52.513805 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-23 20:33:52.513813 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513819 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-23 20:33:52.513824 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-23 20:33:52.513829 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513833 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513838 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-23 20:33:52.513843 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-23 20:33:52.513848 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513852 | orchestrator |
2026-02-23 20:33:52.513857 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-23 20:33:52.513871 | orchestrator | Monday 23 February 2026 20:29:50 +0000 (0:00:01.338) 0:00:22.260 *******
2026-02-23 20:33:52.513877 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513882 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513889 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513896 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513901 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513906 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513910 | orchestrator |
2026-02-23 20:33:52.513916 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-23 20:33:52.513921 | orchestrator | Monday 23 February 2026 20:29:52 +0000 (0:00:01.317) 0:00:23.578 *******
2026-02-23 20:33:52.513926 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.513931 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.513936 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.513941 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.513947 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.513952 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.513958 | orchestrator |
2026-02-23 20:33:52.513964 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-23 20:33:52.513970 | orchestrator |
2026-02-23 20:33:52.513975 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-23 20:33:52.513981 | orchestrator | Monday 23 February 2026 20:29:53 +0000 (0:00:01.771) 0:00:25.349 *******
2026-02-23 20:33:52.513986 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:33:52.513991 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:33:52.513996 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:33:52.514001 | orchestrator |
2026-02-23 20:33:52.514006 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-23 20:33:52.514051 | orchestrator | Monday 23 February 2026 20:29:55 +0000 (0:00:01.876) 0:00:27.226 *******
2026-02-23 20:33:52.514059 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:33:52.514064 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:33:52.514070 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:33:52.514075 | orchestrator |
2026-02-23 20:33:52.514080 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-23 20:33:52.514086 | orchestrator | Monday 23 February 2026 20:29:57 +0000 (0:00:01.408) 0:00:28.635 *******
2026-02-23 20:33:52.514091 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:33:52.514097 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:33:52.514102 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:33:52.514107 | orchestrator |
2026-02-23 20:33:52.514113 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-23 20:33:52.514119 | orchestrator | Monday 23 February 2026 20:29:58 +0000 (0:00:01.244) 0:00:29.879 *******
2026-02-23 20:33:52.514124 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:33:52.514130 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:33:52.514135 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:33:52.514140 | orchestrator |
2026-02-23 20:33:52.514146 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-23 20:33:52.514151 | orchestrator | Monday 23 February 2026 20:29:59 +0000 (0:00:01.109) 0:00:30.989 *******
2026-02-23 20:33:52.514156 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.514162 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.514168 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.514173 | orchestrator |
2026-02-23 20:33:52.514179 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-23 20:33:52.514184 | orchestrator | Monday 23 February 2026 20:30:00 +0000 (0:00:00.623) 0:00:31.613 *******
2026-02-23 20:33:52.514190 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:33:52.514195 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:33:52.514201 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:33:52.514206 | orchestrator |
2026-02-23 20:33:52.514211 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-23 20:33:52.514239 | orchestrator | Monday 23 February 2026 20:30:01 +0000 (0:00:01.091) 0:00:32.704 *******
2026-02-23 20:33:52.514245 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:33:52.514262 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:33:52.514271 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:33:52.514277 | orchestrator |
2026-02-23 20:33:52.514282 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-23 20:33:52.514287 | orchestrator | Monday 23 February 2026 20:30:03 +0000 (0:00:02.217) 0:00:34.922 *******
2026-02-23 20:33:52.514292 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:33:52.514298 | orchestrator |
2026-02-23 20:33:52.514303 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-23 20:33:52.514308 | orchestrator | Monday 23 February 2026 20:30:04 +0000 (0:00:00.750) 0:00:35.672 *******
2026-02-23 20:33:52.514314 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:33:52.514319 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:33:52.514324 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:33:52.514329 | orchestrator |
2026-02-23 20:33:52.514334 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-23 20:33:52.514340 | orchestrator | Monday 23 February 2026 20:30:06 +0000 (0:00:02.333) 0:00:38.006 *******
2026-02-23 20:33:52.514345 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.514350 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.514355 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:33:52.514360 | orchestrator |
2026-02-23 20:33:52.514366 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-23 20:33:52.514377 | orchestrator | Monday 23 February 2026 20:30:07 +0000 (0:00:00.959) 0:00:38.966 *******
2026-02-23 20:33:52.514382 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.514387 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.514393 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:33:52.514398 | orchestrator |
2026-02-23 20:33:52.514403 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-23 20:33:52.514408 | orchestrator | Monday 23 February 2026 20:30:08 +0000 (0:00:01.274) 0:00:40.240 *******
2026-02-23 20:33:52.514413 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.514419 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.514425 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:33:52.514430 | orchestrator |
2026-02-23 20:33:52.514435 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-23 20:33:52.514444 | orchestrator | Monday 23 February 2026 20:30:09 +0000 (0:00:00.720) 0:00:41.456 *******
2026-02-23 20:33:52.514450 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.514455 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.514460 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.514465 | orchestrator |
2026-02-23 20:33:52.514471 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-23 20:33:52.514476 | orchestrator | Monday 23 February 2026 20:30:10 +0000 (0:00:00.604) 0:00:42.176 *******
2026-02-23 20:33:52.514481 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.514486 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.514491 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.514497 | orchestrator |
2026-02-23 20:33:52.514502 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-23 20:33:52.514507 | orchestrator | Monday 23 February 2026 20:30:11 +0000 (0:00:00.604) 0:00:42.781 *******
2026-02-23 20:33:52.514512 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:33:52.514518 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:33:52.514523 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:33:52.514528 | orchestrator |
2026-02-23 20:33:52.514534 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label
compatibility] ********** 2026-02-23 20:33:52.514539 | orchestrator | Monday 23 February 2026 20:30:13 +0000 (0:00:02.050) 0:00:44.832 ******* 2026-02-23 20:33:52.514549 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.514554 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.514560 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.514565 | orchestrator | 2026-02-23 20:33:52.514570 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-23 20:33:52.514575 | orchestrator | Monday 23 February 2026 20:30:15 +0000 (0:00:02.635) 0:00:47.467 ******* 2026-02-23 20:33:52.514580 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.514585 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.514590 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.514595 | orchestrator | 2026-02-23 20:33:52.514600 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-23 20:33:52.514606 | orchestrator | Monday 23 February 2026 20:30:17 +0000 (0:00:01.166) 0:00:48.633 ******* 2026-02-23 20:33:52.514611 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-23 20:33:52.514633 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-23 20:33:52.514638 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-23 20:33:52.514643 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-02-23 20:33:52.514648 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-23 20:33:52.514653 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-23 20:33:52.514658 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-23 20:33:52.514665 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-23 20:33:52.514670 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-23 20:33:52.514675 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-23 20:33:52.514680 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-23 20:33:52.514685 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-02-23 20:33:52.514690 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.514693 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.514696 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.514701 | orchestrator | 2026-02-23 20:33:52.514706 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-23 20:33:52.514712 | orchestrator | Monday 23 February 2026 20:31:01 +0000 (0:00:44.280) 0:01:32.914 ******* 2026-02-23 20:33:52.514717 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.514721 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:33:52.514727 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:33:52.514732 | orchestrator | 2026-02-23 20:33:52.514738 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-23 20:33:52.514743 | orchestrator | Monday 23 February 2026 20:31:01 +0000 (0:00:00.273) 0:01:33.188 ******* 2026-02-23 20:33:52.514748 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:52.514753 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:52.514759 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:52.514768 | orchestrator | 2026-02-23 20:33:52.514773 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-23 20:33:52.514778 | orchestrator | Monday 23 February 2026 20:31:03 +0000 (0:00:01.404) 0:01:34.592 ******* 2026-02-23 20:33:52.514784 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:52.514789 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:52.514794 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:52.514799 | orchestrator | 2026-02-23 20:33:52.514809 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-23 20:33:52.514814 | orchestrator | Monday 23 February 2026 20:31:05 +0000 (0:00:01.998) 0:01:36.591 ******* 2026-02-23 20:33:52.514819 
| orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:52.514825 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:52.514830 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:52.514836 | orchestrator | 2026-02-23 20:33:52.514841 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-23 20:33:52.514846 | orchestrator | Monday 23 February 2026 20:31:30 +0000 (0:00:25.716) 0:02:02.307 ******* 2026-02-23 20:33:52.514851 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.514857 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.514862 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.514867 | orchestrator | 2026-02-23 20:33:52.514872 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-23 20:33:52.514878 | orchestrator | Monday 23 February 2026 20:31:31 +0000 (0:00:00.615) 0:02:02.922 ******* 2026-02-23 20:33:52.514883 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.514889 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.514894 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.514899 | orchestrator | 2026-02-23 20:33:52.514904 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-23 20:33:52.514909 | orchestrator | Monday 23 February 2026 20:31:31 +0000 (0:00:00.529) 0:02:03.452 ******* 2026-02-23 20:33:52.514915 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:52.514920 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:52.514925 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:52.514931 | orchestrator | 2026-02-23 20:33:52.514936 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-23 20:33:52.514942 | orchestrator | Monday 23 February 2026 20:31:32 +0000 (0:00:00.516) 0:02:03.969 ******* 2026-02-23 20:33:52.514947 | orchestrator | ok: [testbed-node-1] 
2026-02-23 20:33:52.514952 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.514957 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.514962 | orchestrator | 2026-02-23 20:33:52.514968 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-23 20:33:52.514973 | orchestrator | Monday 23 February 2026 20:31:33 +0000 (0:00:00.725) 0:02:04.694 ******* 2026-02-23 20:33:52.514978 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.514983 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.514989 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.515003 | orchestrator | 2026-02-23 20:33:52.515009 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-23 20:33:52.515014 | orchestrator | Monday 23 February 2026 20:31:33 +0000 (0:00:00.275) 0:02:04.970 ******* 2026-02-23 20:33:52.515019 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:52.515024 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:52.515030 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:52.515035 | orchestrator | 2026-02-23 20:33:52.515040 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-23 20:33:52.515046 | orchestrator | Monday 23 February 2026 20:31:33 +0000 (0:00:00.509) 0:02:05.479 ******* 2026-02-23 20:33:52.515051 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:52.515056 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:52.515061 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:52.515066 | orchestrator | 2026-02-23 20:33:52.515072 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-23 20:33:52.515080 | orchestrator | Monday 23 February 2026 20:31:34 +0000 (0:00:00.521) 0:02:06.001 ******* 2026-02-23 20:33:52.515086 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:52.515091 | 
orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:52.515141 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:52.515147 | orchestrator | 2026-02-23 20:33:52.515152 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-23 20:33:52.515158 | orchestrator | Monday 23 February 2026 20:31:35 +0000 (0:00:00.974) 0:02:06.975 ******* 2026-02-23 20:33:52.515167 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:33:52.515173 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:33:52.515178 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:33:52.515183 | orchestrator | 2026-02-23 20:33:52.515189 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-23 20:33:52.515194 | orchestrator | Monday 23 February 2026 20:31:36 +0000 (0:00:00.828) 0:02:07.804 ******* 2026-02-23 20:33:52.515200 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.515205 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:33:52.515211 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:33:52.515216 | orchestrator | 2026-02-23 20:33:52.515221 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-23 20:33:52.515226 | orchestrator | Monday 23 February 2026 20:31:36 +0000 (0:00:00.251) 0:02:08.056 ******* 2026-02-23 20:33:52.515231 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.515236 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:33:52.515241 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:33:52.515255 | orchestrator | 2026-02-23 20:33:52.515261 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-23 20:33:52.515267 | orchestrator | Monday 23 February 2026 20:31:36 +0000 (0:00:00.258) 0:02:08.314 ******* 2026-02-23 20:33:52.515272 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.515277 | orchestrator | 
ok: [testbed-node-0] 2026-02-23 20:33:52.515283 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.515288 | orchestrator | 2026-02-23 20:33:52.515293 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-23 20:33:52.515298 | orchestrator | Monday 23 February 2026 20:31:37 +0000 (0:00:00.786) 0:02:09.100 ******* 2026-02-23 20:33:52.515303 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.515309 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.515314 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.515319 | orchestrator | 2026-02-23 20:33:52.515324 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-23 20:33:52.515330 | orchestrator | Monday 23 February 2026 20:31:38 +0000 (0:00:00.735) 0:02:09.836 ******* 2026-02-23 20:33:52.515336 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-23 20:33:52.515347 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-23 20:33:52.515352 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-23 20:33:52.515357 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-23 20:33:52.515362 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-23 20:33:52.515368 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-23 20:33:52.515373 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-23 20:33:52.515378 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-23 
20:33:52.515384 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-23 20:33:52.515389 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-23 20:33:52.515403 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-23 20:33:52.515409 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-23 20:33:52.515414 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-23 20:33:52.515419 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-23 20:33:52.515425 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-23 20:33:52.515430 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-23 20:33:52.515435 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-23 20:33:52.515441 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-23 20:33:52.515446 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-23 20:33:52.515452 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-23 20:33:52.515458 | orchestrator | 2026-02-23 20:33:52.515464 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-23 20:33:52.515469 | orchestrator | 2026-02-23 20:33:52.515475 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-23 20:33:52.515480 | orchestrator | Monday 23 February 2026 20:31:41 +0000 (0:00:03.323) 
0:02:13.160 ******* 2026-02-23 20:33:52.515485 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:33:52.515490 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:33:52.515496 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:33:52.515501 | orchestrator | 2026-02-23 20:33:52.515506 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-23 20:33:52.515511 | orchestrator | Monday 23 February 2026 20:31:42 +0000 (0:00:00.423) 0:02:13.583 ******* 2026-02-23 20:33:52.515516 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:33:52.515521 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:33:52.515526 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:33:52.515531 | orchestrator | 2026-02-23 20:33:52.515543 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-23 20:33:52.515549 | orchestrator | Monday 23 February 2026 20:31:42 +0000 (0:00:00.631) 0:02:14.215 ******* 2026-02-23 20:33:52.515553 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:33:52.515556 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:33:52.515559 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:33:52.515562 | orchestrator | 2026-02-23 20:33:52.515565 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-23 20:33:52.515573 | orchestrator | Monday 23 February 2026 20:31:43 +0000 (0:00:00.298) 0:02:14.513 ******* 2026-02-23 20:33:52.515579 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:33:52.515584 | orchestrator | 2026-02-23 20:33:52.515589 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-23 20:33:52.515605 | orchestrator | Monday 23 February 2026 20:31:43 +0000 (0:00:00.568) 0:02:15.081 ******* 2026-02-23 20:33:52.515611 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:33:52.515628 | 
orchestrator | skipping: [testbed-node-4] 2026-02-23 20:33:52.515633 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:33:52.515638 | orchestrator | 2026-02-23 20:33:52.515643 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-23 20:33:52.515648 | orchestrator | Monday 23 February 2026 20:31:43 +0000 (0:00:00.325) 0:02:15.407 ******* 2026-02-23 20:33:52.515653 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:33:52.515658 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:33:52.515663 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:33:52.515669 | orchestrator | 2026-02-23 20:33:52.515672 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-23 20:33:52.515679 | orchestrator | Monday 23 February 2026 20:31:44 +0000 (0:00:00.282) 0:02:15.689 ******* 2026-02-23 20:33:52.515683 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:33:52.515686 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:33:52.515689 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:33:52.515693 | orchestrator | 2026-02-23 20:33:52.515696 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-23 20:33:52.515699 | orchestrator | Monday 23 February 2026 20:31:44 +0000 (0:00:00.347) 0:02:16.036 ******* 2026-02-23 20:33:52.515702 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:33:52.515706 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:33:52.515709 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:33:52.515712 | orchestrator | 2026-02-23 20:33:52.515720 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-23 20:33:52.515723 | orchestrator | Monday 23 February 2026 20:31:45 +0000 (0:00:00.799) 0:02:16.836 ******* 2026-02-23 20:33:52.515727 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:33:52.515730 | 
orchestrator | changed: [testbed-node-5] 2026-02-23 20:33:52.515733 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:33:52.515736 | orchestrator | 2026-02-23 20:33:52.515739 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-23 20:33:52.515743 | orchestrator | Monday 23 February 2026 20:31:46 +0000 (0:00:01.329) 0:02:18.165 ******* 2026-02-23 20:33:52.515746 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:33:52.515749 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:33:52.515752 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:33:52.515755 | orchestrator | 2026-02-23 20:33:52.515759 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-23 20:33:52.515762 | orchestrator | Monday 23 February 2026 20:31:47 +0000 (0:00:01.215) 0:02:19.381 ******* 2026-02-23 20:33:52.515765 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:33:52.515768 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:33:52.515771 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:33:52.515774 | orchestrator | 2026-02-23 20:33:52.515778 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-23 20:33:52.515781 | orchestrator | 2026-02-23 20:33:52.515784 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-23 20:33:52.515787 | orchestrator | Monday 23 February 2026 20:31:58 +0000 (0:00:10.225) 0:02:29.606 ******* 2026-02-23 20:33:52.515790 | orchestrator | ok: [testbed-manager] 2026-02-23 20:33:52.515794 | orchestrator | 2026-02-23 20:33:52.515797 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-23 20:33:52.515800 | orchestrator | Monday 23 February 2026 20:31:59 +0000 (0:00:01.002) 0:02:30.609 ******* 2026-02-23 20:33:52.515803 | orchestrator | changed: [testbed-manager] 2026-02-23 
20:33:52.515806 | orchestrator | 2026-02-23 20:33:52.515809 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-23 20:33:52.515812 | orchestrator | Monday 23 February 2026 20:31:59 +0000 (0:00:00.487) 0:02:31.096 ******* 2026-02-23 20:33:52.515816 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-23 20:33:52.515819 | orchestrator | 2026-02-23 20:33:52.515822 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-23 20:33:52.515825 | orchestrator | Monday 23 February 2026 20:32:00 +0000 (0:00:00.833) 0:02:31.930 ******* 2026-02-23 20:33:52.515828 | orchestrator | changed: [testbed-manager] 2026-02-23 20:33:52.515832 | orchestrator | 2026-02-23 20:33:52.515835 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-23 20:33:52.515838 | orchestrator | Monday 23 February 2026 20:32:01 +0000 (0:00:01.223) 0:02:33.153 ******* 2026-02-23 20:33:52.515841 | orchestrator | changed: [testbed-manager] 2026-02-23 20:33:52.515844 | orchestrator | 2026-02-23 20:33:52.515847 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-23 20:33:52.515854 | orchestrator | Monday 23 February 2026 20:32:02 +0000 (0:00:00.655) 0:02:33.809 ******* 2026-02-23 20:33:52.515857 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-23 20:33:52.515861 | orchestrator | 2026-02-23 20:33:52.515864 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-23 20:33:52.515867 | orchestrator | Monday 23 February 2026 20:32:04 +0000 (0:00:02.069) 0:02:35.879 ******* 2026-02-23 20:33:52.515870 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-23 20:33:52.515873 | orchestrator | 2026-02-23 20:33:52.515877 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-02-23 20:33:52.515880 | orchestrator | Monday 23 February 2026 20:32:05 +0000 (0:00:01.232) 0:02:37.111 ******* 2026-02-23 20:33:52.515883 | orchestrator | changed: [testbed-manager] 2026-02-23 20:33:52.515886 | orchestrator | 2026-02-23 20:33:52.515892 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-23 20:33:52.515895 | orchestrator | Monday 23 February 2026 20:32:06 +0000 (0:00:00.872) 0:02:37.984 ******* 2026-02-23 20:33:52.515898 | orchestrator | changed: [testbed-manager] 2026-02-23 20:33:52.515902 | orchestrator | 2026-02-23 20:33:52.515905 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-23 20:33:52.515908 | orchestrator | 2026-02-23 20:33:52.515911 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-23 20:33:52.515914 | orchestrator | Monday 23 February 2026 20:32:07 +0000 (0:00:00.509) 0:02:38.494 ******* 2026-02-23 20:33:52.515917 | orchestrator | ok: [testbed-manager] 2026-02-23 20:33:52.515920 | orchestrator | 2026-02-23 20:33:52.515924 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-23 20:33:52.515927 | orchestrator | Monday 23 February 2026 20:32:07 +0000 (0:00:00.161) 0:02:38.656 ******* 2026-02-23 20:33:52.515930 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-23 20:33:52.515933 | orchestrator | 2026-02-23 20:33:52.515936 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-23 20:33:52.515939 | orchestrator | Monday 23 February 2026 20:32:07 +0000 (0:00:00.239) 0:02:38.895 ******* 2026-02-23 20:33:52.515942 | orchestrator | ok: [testbed-manager] 2026-02-23 20:33:52.515945 | orchestrator | 2026-02-23 20:33:52.515948 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-02-23 20:33:52.515951 | orchestrator | Monday 23 February 2026 20:32:08 +0000 (0:00:01.004) 0:02:39.899 ******* 2026-02-23 20:33:52.515954 | orchestrator | ok: [testbed-manager] 2026-02-23 20:33:52.515958 | orchestrator | 2026-02-23 20:33:52.515961 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-23 20:33:52.515964 | orchestrator | Monday 23 February 2026 20:32:10 +0000 (0:00:01.750) 0:02:41.650 ******* 2026-02-23 20:33:52.515967 | orchestrator | changed: [testbed-manager] 2026-02-23 20:33:52.515970 | orchestrator | 2026-02-23 20:33:52.515973 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-23 20:33:52.515976 | orchestrator | Monday 23 February 2026 20:32:10 +0000 (0:00:00.697) 0:02:42.348 ******* 2026-02-23 20:33:52.515979 | orchestrator | ok: [testbed-manager] 2026-02-23 20:33:52.515983 | orchestrator | 2026-02-23 20:33:52.515988 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-23 20:33:52.515992 | orchestrator | Monday 23 February 2026 20:32:11 +0000 (0:00:00.462) 0:02:42.811 ******* 2026-02-23 20:33:52.515995 | orchestrator | changed: [testbed-manager] 2026-02-23 20:33:52.515998 | orchestrator | 2026-02-23 20:33:52.516002 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-23 20:33:52.516005 | orchestrator | Monday 23 February 2026 20:32:18 +0000 (0:00:07.473) 0:02:50.284 ******* 2026-02-23 20:33:52.516008 | orchestrator | changed: [testbed-manager] 2026-02-23 20:33:52.516011 | orchestrator | 2026-02-23 20:33:52.516125 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-23 20:33:52.516129 | orchestrator | Monday 23 February 2026 20:32:30 +0000 (0:00:11.395) 0:03:01.679 ******* 2026-02-23 20:33:52.516136 | orchestrator | ok: [testbed-manager] 2026-02-23 
20:33:52.516140 | orchestrator | 2026-02-23 20:33:52.516144 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-23 20:33:52.516147 | orchestrator | 2026-02-23 20:33:52.516150 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-23 20:33:52.516153 | orchestrator | Monday 23 February 2026 20:32:30 +0000 (0:00:00.489) 0:03:02.169 ******* 2026-02-23 20:33:52.516157 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.516160 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.516163 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.516166 | orchestrator | 2026-02-23 20:33:52.516171 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-23 20:33:52.516177 | orchestrator | Monday 23 February 2026 20:32:31 +0000 (0:00:00.496) 0:03:02.665 ******* 2026-02-23 20:33:52.516182 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.516187 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:33:52.516192 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:33:52.516198 | orchestrator | 2026-02-23 20:33:52.516203 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-23 20:33:52.516209 | orchestrator | Monday 23 February 2026 20:32:31 +0000 (0:00:00.359) 0:03:03.025 ******* 2026-02-23 20:33:52.516214 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:33:52.516219 | orchestrator | 2026-02-23 20:33:52.516253 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-23 20:33:52.516259 | orchestrator | Monday 23 February 2026 20:32:32 +0000 (0:00:00.768) 0:03:03.793 ******* 2026-02-23 20:33:52.516263 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-23 20:33:52.516269 | 
orchestrator | 2026-02-23 20:33:52.516274 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-23 20:33:52.516280 | orchestrator | Monday 23 February 2026 20:32:33 +0000 (0:00:01.018) 0:03:04.812 ******* 2026-02-23 20:33:52.516285 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:33:52.516291 | orchestrator | 2026-02-23 20:33:52.516295 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-23 20:33:52.516300 | orchestrator | Monday 23 February 2026 20:32:34 +0000 (0:00:00.777) 0:03:05.589 ******* 2026-02-23 20:33:52.516318 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.516323 | orchestrator | 2026-02-23 20:33:52.516328 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-23 20:33:52.516333 | orchestrator | Monday 23 February 2026 20:32:34 +0000 (0:00:00.141) 0:03:05.731 ******* 2026-02-23 20:33:52.516338 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:33:52.516343 | orchestrator | 2026-02-23 20:33:52.516348 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-23 20:33:52.516353 | orchestrator | Monday 23 February 2026 20:32:35 +0000 (0:00:01.088) 0:03:06.819 ******* 2026-02-23 20:33:52.516359 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.516364 | orchestrator | 2026-02-23 20:33:52.516373 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-23 20:33:52.516378 | orchestrator | Monday 23 February 2026 20:32:35 +0000 (0:00:00.125) 0:03:06.945 ******* 2026-02-23 20:33:52.516383 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.516388 | orchestrator | 2026-02-23 20:33:52.516393 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-23 20:33:52.516399 | orchestrator | Monday 23 
February 2026 20:32:35 +0000 (0:00:00.104) 0:03:07.050 ******* 2026-02-23 20:33:52.516404 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.516409 | orchestrator | 2026-02-23 20:33:52.516414 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-23 20:33:52.516420 | orchestrator | Monday 23 February 2026 20:32:35 +0000 (0:00:00.102) 0:03:07.152 ******* 2026-02-23 20:33:52.516425 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.516430 | orchestrator | 2026-02-23 20:33:52.516440 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-23 20:33:52.516445 | orchestrator | Monday 23 February 2026 20:32:35 +0000 (0:00:00.115) 0:03:07.268 ******* 2026-02-23 20:33:52.516451 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-23 20:33:52.516456 | orchestrator | 2026-02-23 20:33:52.516461 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-23 20:33:52.516466 | orchestrator | Monday 23 February 2026 20:32:41 +0000 (0:00:05.652) 0:03:12.921 ******* 2026-02-23 20:33:52.516472 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-23 20:33:52.516477 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-23 20:33:52.516482 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-23 20:33:52.516488 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-23 20:33:52.516493 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-23 20:33:52.516498 | orchestrator | 2026-02-23 20:33:52.516504 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-23 20:33:52.516509 | orchestrator | Monday 23 February 2026 20:33:24 +0000 (0:00:42.860) 0:03:55.781 ******* 2026-02-23 20:33:52.516521 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:33:52.516526 | orchestrator | 2026-02-23 20:33:52.516531 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-23 20:33:52.516536 | orchestrator | Monday 23 February 2026 20:33:25 +0000 (0:00:01.081) 0:03:56.863 ******* 2026-02-23 20:33:52.516542 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-23 20:33:52.516547 | orchestrator | 2026-02-23 20:33:52.516552 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-23 20:33:52.516558 | orchestrator | Monday 23 February 2026 20:33:26 +0000 (0:00:01.509) 0:03:58.373 ******* 2026-02-23 20:33:52.516563 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-23 20:33:52.516568 | orchestrator | 2026-02-23 20:33:52.516574 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-23 20:33:52.516579 | orchestrator | Monday 23 February 2026 20:33:27 +0000 (0:00:01.064) 0:03:59.437 ******* 2026-02-23 20:33:52.516585 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.516590 | orchestrator | 2026-02-23 20:33:52.516595 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-23 20:33:52.516787 | orchestrator 
| Monday 23 February 2026 20:33:28 +0000 (0:00:00.099) 0:03:59.537 ******* 2026-02-23 20:33:52.516801 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-23 20:33:52.516807 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-23 20:33:52.516813 | orchestrator | 2026-02-23 20:33:52.516818 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-23 20:33:52.516824 | orchestrator | Monday 23 February 2026 20:33:29 +0000 (0:00:01.704) 0:04:01.241 ******* 2026-02-23 20:33:52.516829 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:33:52.516834 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:33:52.516839 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:33:52.516845 | orchestrator | 2026-02-23 20:33:52.516850 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-23 20:33:52.516855 | orchestrator | Monday 23 February 2026 20:33:30 +0000 (0:00:00.349) 0:04:01.591 ******* 2026-02-23 20:33:52.516860 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.516866 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.516871 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.516876 | orchestrator | 2026-02-23 20:33:52.516882 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-23 20:33:52.516887 | orchestrator | 2026-02-23 20:33:52.516892 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-23 20:33:52.516904 | orchestrator | Monday 23 February 2026 20:33:31 +0000 (0:00:00.966) 0:04:02.557 ******* 2026-02-23 20:33:52.516909 | orchestrator | ok: [testbed-manager] 2026-02-23 20:33:52.516914 | orchestrator | 2026-02-23 20:33:52.516920 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-23 20:33:52.516925 | orchestrator | Monday 23 February 2026 20:33:31 +0000 (0:00:00.124) 0:04:02.681 ******* 2026-02-23 20:33:52.516930 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-23 20:33:52.516936 | orchestrator | 2026-02-23 20:33:52.516941 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-23 20:33:52.516964 | orchestrator | Monday 23 February 2026 20:33:31 +0000 (0:00:00.207) 0:04:02.889 ******* 2026-02-23 20:33:52.516982 | orchestrator | changed: [testbed-manager] 2026-02-23 20:33:52.516988 | orchestrator | 2026-02-23 20:33:52.516994 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-23 20:33:52.517000 | orchestrator | 2026-02-23 20:33:52.517005 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-23 20:33:52.517011 | orchestrator | Monday 23 February 2026 20:33:36 +0000 (0:00:05.592) 0:04:08.482 ******* 2026-02-23 20:33:52.517047 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:33:52.517054 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:33:52.517059 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:33:52.517064 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:33:52.517069 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:33:52.517074 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:33:52.517079 | orchestrator | 2026-02-23 20:33:52.517085 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-23 20:33:52.517090 | orchestrator | Monday 23 February 2026 20:33:37 +0000 (0:00:00.628) 0:04:09.111 ******* 2026-02-23 20:33:52.517102 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-23 20:33:52.517105 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-23 20:33:52.517108 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-23 20:33:52.517112 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-23 20:33:52.517115 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-23 20:33:52.517118 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-23 20:33:52.517121 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-23 20:33:52.517124 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-23 20:33:52.517127 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-23 20:33:52.517131 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-23 20:33:52.517134 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-23 20:33:52.517137 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-23 20:33:52.517146 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-23 20:33:52.517152 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-23 20:33:52.517157 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-23 20:33:52.517161 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-23 20:33:52.517166 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-23 20:33:52.517171 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/network-plane=true) 2026-02-23 20:33:52.517185 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-23 20:33:52.517196 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-23 20:33:52.517201 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-23 20:33:52.517206 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-23 20:33:52.517212 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-23 20:33:52.517217 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-23 20:33:52.517223 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-23 20:33:52.517228 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-23 20:33:52.517233 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-23 20:33:52.517238 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-23 20:33:52.517243 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-23 20:33:52.517248 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-23 20:33:52.517254 | orchestrator | 2026-02-23 20:33:52.517257 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-23 20:33:52.517260 | orchestrator | Monday 23 February 2026 20:33:49 +0000 (0:00:12.232) 0:04:21.343 ******* 2026-02-23 20:33:52.517263 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:33:52.517267 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:33:52.517273 | orchestrator | 
skipping: [testbed-node-5]
2026-02-23 20:33:52.517277 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.517282 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.517286 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.517292 | orchestrator |
2026-02-23 20:33:52.517296 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-23 20:33:52.517299 | orchestrator | Monday 23 February 2026 20:33:50 +0000 (0:00:00.697) 0:04:22.040 *******
2026-02-23 20:33:52.517302 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:33:52.517305 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:33:52.517308 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:33:52.517311 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:33:52.517314 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:33:52.517317 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:33:52.517320 | orchestrator |
2026-02-23 20:33:52.517323 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:33:52.517327 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:33:52.517334 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-23 20:33:52.517338 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-23 20:33:52.517341 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-23 20:33:52.517344 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-23 20:33:52.517348 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-23 20:33:52.517351 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-23 20:33:52.517356 | orchestrator |
2026-02-23 20:33:52.517360 | orchestrator |
2026-02-23 20:33:52.517363 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:33:52.517366 | orchestrator | Monday 23 February 2026 20:33:51 +0000 (0:00:00.467) 0:04:22.508 *******
2026-02-23 20:33:52.517369 | orchestrator | ===============================================================================
2026-02-23 20:33:52.517372 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.28s
2026-02-23 20:33:52.517375 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.86s
2026-02-23 20:33:52.517378 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.72s
2026-02-23 20:33:52.517385 | orchestrator | Manage labels ---------------------------------------------------------- 12.23s
2026-02-23 20:33:52.517388 | orchestrator | kubectl : Install required packages ------------------------------------ 11.40s
2026-02-23 20:33:52.517426 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.23s
2026-02-23 20:33:52.517429 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.47s
2026-02-23 20:33:52.517433 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.65s
2026-02-23 20:33:52.517436 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.59s
2026-02-23 20:33:52.517439 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.34s
2026-02-23 20:33:52.517442 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.32s
2026-02-23 20:33:52.517445 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.64s
2026-02-23 20:33:52.517448 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.33s
2026-02-23 20:33:52.517451 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.22s
2026-02-23 20:33:52.517454 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.07s
2026-02-23 20:33:52.517457 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.05s
2026-02-23 20:33:52.517460 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.02s
2026-02-23 20:33:52.517464 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 2.00s
2026-02-23 20:33:52.517467 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.88s
2026-02-23 20:33:52.517470 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.78s
2026-02-23 20:33:52.517591 | orchestrator | 2026-02-23 20:33:52 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:33:52.517598 | orchestrator | 2026-02-23 20:33:52 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED
2026-02-23 20:33:52.517601 | orchestrator | 2026-02-23 20:33:52 | INFO  | Task 4e030736-d068-446e-aba0-531be445d649 is in state STARTED
2026-02-23 20:33:52.517604 | orchestrator | 2026-02-23 20:33:52 | INFO  | Task 2434cb36-7ca2-45b4-ba13-50fec8473674 is in state STARTED
2026-02-23 20:33:52.517607 | orchestrator | 2026-02-23 20:33:52 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:33:52.517610 | orchestrator | 2026-02-23 20:33:52 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:33:55.555432 | orchestrator | 2026-02-23 20:33:55 | INFO  |
Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:33:55.556316 | orchestrator | 2026-02-23 20:33:55 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:33:55.558738 | orchestrator | 2026-02-23 20:33:55 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:33:55.561034 | orchestrator | 2026-02-23 20:33:55 | INFO  | Task 4e030736-d068-446e-aba0-531be445d649 is in state STARTED 2026-02-23 20:33:55.563399 | orchestrator | 2026-02-23 20:33:55 | INFO  | Task 2434cb36-7ca2-45b4-ba13-50fec8473674 is in state STARTED 2026-02-23 20:33:55.565526 | orchestrator | 2026-02-23 20:33:55 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:33:55.565573 | orchestrator | 2026-02-23 20:33:55 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:33:58.608308 | orchestrator | 2026-02-23 20:33:58 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:33:58.608807 | orchestrator | 2026-02-23 20:33:58 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:33:58.610685 | orchestrator | 2026-02-23 20:33:58 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:33:58.611548 | orchestrator | 2026-02-23 20:33:58 | INFO  | Task 4e030736-d068-446e-aba0-531be445d649 is in state STARTED 2026-02-23 20:33:58.611592 | orchestrator | 2026-02-23 20:33:58 | INFO  | Task 2434cb36-7ca2-45b4-ba13-50fec8473674 is in state SUCCESS 2026-02-23 20:33:58.612369 | orchestrator | 2026-02-23 20:33:58 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:33:58.612395 | orchestrator | 2026-02-23 20:33:58 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:34:01.634563 | orchestrator | 2026-02-23 20:34:01 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:34:01.634698 | orchestrator | 2026-02-23 20:34:01 | INFO  | Task 
c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:34:01.635382 | orchestrator | 2026-02-23 20:34:01 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:34:01.636043 | orchestrator | 2026-02-23 20:34:01 | INFO  | Task 4e030736-d068-446e-aba0-531be445d649 is in state STARTED 2026-02-23 20:34:01.636445 | orchestrator | 2026-02-23 20:34:01 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:34:01.636633 | orchestrator | 2026-02-23 20:34:01 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:34:04.656219 | orchestrator | 2026-02-23 20:34:04 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:34:04.656840 | orchestrator | 2026-02-23 20:34:04 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:34:04.657730 | orchestrator | 2026-02-23 20:34:04 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:34:04.658599 | orchestrator | 2026-02-23 20:34:04 | INFO  | Task 4e030736-d068-446e-aba0-531be445d649 is in state SUCCESS 2026-02-23 20:34:04.659395 | orchestrator | 2026-02-23 20:34:04 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:34:04.659437 | orchestrator | 2026-02-23 20:34:04 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:34:07.700041 | orchestrator | 2026-02-23 20:34:07 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:34:07.702337 | orchestrator | 2026-02-23 20:34:07 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:34:07.704467 | orchestrator | 2026-02-23 20:34:07 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:34:07.706503 | orchestrator | 2026-02-23 20:34:07 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:34:07.706563 | orchestrator | 2026-02-23 20:34:07 | INFO  | Wait 1 
second(s) until the next check 2026-02-23 20:34:10.730573 | orchestrator | 2026-02-23 20:34:10 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:34:10.731159 | orchestrator | 2026-02-23 20:34:10 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:34:10.732441 | orchestrator | 2026-02-23 20:34:10 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:34:10.732976 | orchestrator | 2026-02-23 20:34:10 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:34:10.733025 | orchestrator | 2026-02-23 20:34:10 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:34:13.765531 | orchestrator | 2026-02-23 20:34:13 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:34:13.766196 | orchestrator | 2026-02-23 20:34:13 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:34:13.766900 | orchestrator | 2026-02-23 20:34:13 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:34:13.767627 | orchestrator | 2026-02-23 20:34:13 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:34:13.767859 | orchestrator | 2026-02-23 20:34:13 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:34:16.797652 | orchestrator | 2026-02-23 20:34:16 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:34:16.799934 | orchestrator | 2026-02-23 20:34:16 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:34:16.805038 | orchestrator | 2026-02-23 20:34:16 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:34:16.805093 | orchestrator | 2026-02-23 20:34:16 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:34:16.805119 | orchestrator | 2026-02-23 20:34:16 | INFO  | Wait 1 second(s) until the next check 
2026-02-23 20:34:19.829898 | orchestrator | 2026-02-23 20:34:19 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:34:19.833199 | orchestrator | 2026-02-23 20:34:19 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:34:19.835099 | orchestrator | 2026-02-23 20:34:19 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:34:19.837672 | orchestrator | 2026-02-23 20:34:19 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:34:19.837713 | orchestrator | 2026-02-23 20:34:19 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:34:22.864547 | orchestrator | 2026-02-23 20:34:22 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state STARTED 2026-02-23 20:34:22.865151 | orchestrator | 2026-02-23 20:34:22 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:34:22.865717 | orchestrator | 2026-02-23 20:34:22 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:34:22.866477 | orchestrator | 2026-02-23 20:34:22 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:34:22.866504 | orchestrator | 2026-02-23 20:34:22 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:34:25.907382 | orchestrator | 2026-02-23 20:34:25.907457 | orchestrator | 2026-02-23 20:34:25.907465 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-23 20:34:25.907471 | orchestrator | 2026-02-23 20:34:25.907477 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-23 20:34:25.907483 | orchestrator | Monday 23 February 2026 20:33:55 +0000 (0:00:00.172) 0:00:00.172 ******* 2026-02-23 20:34:25.907489 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-23 20:34:25.907494 | orchestrator | 2026-02-23 20:34:25.907520 | orchestrator | 
TASK [Write kubeconfig file] ***************************************************
2026-02-23 20:34:25.907527 | orchestrator | Monday 23 February 2026 20:33:56 +0000 (0:00:00.736) 0:00:00.909 *******
2026-02-23 20:34:25.907532 | orchestrator | changed: [testbed-manager]
2026-02-23 20:34:25.907548 | orchestrator |
2026-02-23 20:34:25.907559 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-23 20:34:25.907564 | orchestrator | Monday 23 February 2026 20:33:57 +0000 (0:00:01.031) 0:00:01.940 *******
2026-02-23 20:34:25.907570 | orchestrator | changed: [testbed-manager]
2026-02-23 20:34:25.907575 | orchestrator |
2026-02-23 20:34:25.907580 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:34:25.907586 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:34:25.907622 | orchestrator |
2026-02-23 20:34:25.907627 | orchestrator |
2026-02-23 20:34:25.907633 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:34:25.907638 | orchestrator | Monday 23 February 2026 20:33:58 +0000 (0:00:00.450) 0:00:02.390 *******
2026-02-23 20:34:25.907643 | orchestrator | ===============================================================================
2026-02-23 20:34:25.907649 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.03s
2026-02-23 20:34:25.907654 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s
2026-02-23 20:34:25.907659 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.45s
2026-02-23 20:34:25.907664 | orchestrator |
2026-02-23 20:34:25.907669 | orchestrator |
2026-02-23 20:34:25.907674 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-23 20:34:25.907679 |
orchestrator | 2026-02-23 20:34:25.907684 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-23 20:34:25.907689 | orchestrator | Monday 23 February 2026 20:33:55 +0000 (0:00:00.188) 0:00:00.188 ******* 2026-02-23 20:34:25.907694 | orchestrator | ok: [testbed-manager] 2026-02-23 20:34:25.907700 | orchestrator | 2026-02-23 20:34:25.907706 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-23 20:34:25.907711 | orchestrator | Monday 23 February 2026 20:33:55 +0000 (0:00:00.511) 0:00:00.700 ******* 2026-02-23 20:34:25.907716 | orchestrator | ok: [testbed-manager] 2026-02-23 20:34:25.907721 | orchestrator | 2026-02-23 20:34:25.907727 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-23 20:34:25.907732 | orchestrator | Monday 23 February 2026 20:33:56 +0000 (0:00:00.555) 0:00:01.256 ******* 2026-02-23 20:34:25.907737 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-23 20:34:25.907742 | orchestrator | 2026-02-23 20:34:25.907747 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-23 20:34:25.907752 | orchestrator | Monday 23 February 2026 20:33:57 +0000 (0:00:00.782) 0:00:02.039 ******* 2026-02-23 20:34:25.907757 | orchestrator | changed: [testbed-manager] 2026-02-23 20:34:25.907762 | orchestrator | 2026-02-23 20:34:25.907767 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-23 20:34:25.907772 | orchestrator | Monday 23 February 2026 20:33:59 +0000 (0:00:01.743) 0:00:03.782 ******* 2026-02-23 20:34:25.907843 | orchestrator | changed: [testbed-manager] 2026-02-23 20:34:25.907851 | orchestrator | 2026-02-23 20:34:25.907857 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-23 20:34:25.907862 | orchestrator | Monday 23 
February 2026 20:33:59 +0000 (0:00:00.469) 0:00:04.252 *******
2026-02-23 20:34:25.907867 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-23 20:34:25.907872 | orchestrator |
2026-02-23 20:34:25.907877 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-23 20:34:25.907882 | orchestrator | Monday 23 February 2026 20:34:01 +0000 (0:00:01.562) 0:00:05.815 *******
2026-02-23 20:34:25.907888 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-23 20:34:25.907892 | orchestrator |
2026-02-23 20:34:25.907905 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-23 20:34:25.907911 | orchestrator | Monday 23 February 2026 20:34:01 +0000 (0:00:00.779) 0:00:06.595 *******
2026-02-23 20:34:25.907916 | orchestrator | ok: [testbed-manager]
2026-02-23 20:34:25.907921 | orchestrator |
2026-02-23 20:34:25.907926 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-23 20:34:25.907930 | orchestrator | Monday 23 February 2026 20:34:02 +0000 (0:00:00.354) 0:00:06.950 *******
2026-02-23 20:34:25.907935 | orchestrator | ok: [testbed-manager]
2026-02-23 20:34:25.907941 | orchestrator |
2026-02-23 20:34:25.907945 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:34:25.907951 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:34:25.907957 | orchestrator |
2026-02-23 20:34:25.907962 | orchestrator |
2026-02-23 20:34:25.907999 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:34:25.908007 | orchestrator | Monday 23 February 2026 20:34:02 +0000 (0:00:00.290) 0:00:07.240 *******
2026-02-23 20:34:25.908013 | orchestrator | ===============================================================================
2026-02-23 20:34:25.908018 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.74s
2026-02-23 20:34:25.908023 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.56s
2026-02-23 20:34:25.908028 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s
2026-02-23 20:34:25.908049 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.78s
2026-02-23 20:34:25.908056 | orchestrator | Create .kube directory -------------------------------------------------- 0.56s
2026-02-23 20:34:25.908061 | orchestrator | Get home directory of operator user ------------------------------------- 0.51s
2026-02-23 20:34:25.908067 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.47s
2026-02-23 20:34:25.908072 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.35s
2026-02-23 20:34:25.908077 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.29s
2026-02-23 20:34:25.908083 | orchestrator |
2026-02-23 20:34:25.908088 | orchestrator |
2026-02-23 20:34:25.908093 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-02-23 20:34:25.908098 | orchestrator |
2026-02-23 20:34:25.908103 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-02-23 20:34:25.908108 | orchestrator | Monday 23 February 2026 20:32:10 +0000 (0:00:00.110) 0:00:00.110 *******
2026-02-23 20:34:25.908113 | orchestrator | ok: [localhost] => {
2026-02-23 20:34:25.908119 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-02-23 20:34:25.908125 | orchestrator | }
2026-02-23 20:34:25.908131 | orchestrator |
2026-02-23 20:34:25.908136 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-02-23 20:34:25.908141 | orchestrator | Monday 23 February 2026 20:32:10 +0000 (0:00:00.080) 0:00:00.190 *******
2026-02-23 20:34:25.908148 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-02-23 20:34:25.908154 | orchestrator | ...ignoring
2026-02-23 20:34:25.908159 | orchestrator |
2026-02-23 20:34:25.908164 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-02-23 20:34:25.908169 | orchestrator | Monday 23 February 2026 20:32:14 +0000 (0:00:03.357) 0:00:03.548 *******
2026-02-23 20:34:25.908174 | orchestrator | skipping: [localhost]
2026-02-23 20:34:25.908180 | orchestrator |
2026-02-23 20:34:25.908185 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-02-23 20:34:25.908190 | orchestrator | Monday 23 February 2026 20:32:14 +0000 (0:00:00.120) 0:00:03.668 *******
2026-02-23 20:34:25.908196 | orchestrator | ok: [localhost]
2026-02-23 20:34:25.908201 | orchestrator |
2026-02-23 20:34:25.908211 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:34:25.908216 | orchestrator |
2026-02-23 20:34:25.908221 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:34:25.908227 | orchestrator | Monday 23 February 2026 20:32:14 +0000 (0:00:00.255) 0:00:03.923 *******
2026-02-23 20:34:25.908231 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:34:25.908237 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:34:25.908242 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:34:25.908247 | orchestrator | 2026-02-23
20:34:25.908252 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:34:25.908257 | orchestrator | Monday 23 February 2026 20:32:15 +0000 (0:00:00.891) 0:00:04.815 ******* 2026-02-23 20:34:25.908263 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-23 20:34:25.908269 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-23 20:34:25.908274 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-23 20:34:25.908279 | orchestrator | 2026-02-23 20:34:25.908284 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-23 20:34:25.908289 | orchestrator | 2026-02-23 20:34:25.908294 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-23 20:34:25.908299 | orchestrator | Monday 23 February 2026 20:32:16 +0000 (0:00:00.966) 0:00:05.781 ******* 2026-02-23 20:34:25.908305 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:34:25.908310 | orchestrator | 2026-02-23 20:34:25.908315 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-23 20:34:25.908321 | orchestrator | Monday 23 February 2026 20:32:17 +0000 (0:00:00.675) 0:00:06.457 ******* 2026-02-23 20:34:25.908326 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:34:25.908331 | orchestrator | 2026-02-23 20:34:25.908336 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-23 20:34:25.908341 | orchestrator | Monday 23 February 2026 20:32:18 +0000 (0:00:01.147) 0:00:07.604 ******* 2026-02-23 20:34:25.908346 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:34:25.908351 | orchestrator | 2026-02-23 20:34:25.908357 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2026-02-23 20:34:25.908362 | orchestrator | Monday 23 February 2026 20:32:18 +0000 (0:00:00.414) 0:00:08.018 ******* 2026-02-23 20:34:25.908366 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:34:25.908372 | orchestrator | 2026-02-23 20:34:25.908377 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-23 20:34:25.908383 | orchestrator | Monday 23 February 2026 20:32:19 +0000 (0:00:00.426) 0:00:08.445 ******* 2026-02-23 20:34:25.908388 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:34:25.908393 | orchestrator | 2026-02-23 20:34:25.908398 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-23 20:34:25.908403 | orchestrator | Monday 23 February 2026 20:32:20 +0000 (0:00:00.823) 0:00:09.268 ******* 2026-02-23 20:34:25.908413 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:34:25.908419 | orchestrator | 2026-02-23 20:34:25.908424 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-23 20:34:25.908429 | orchestrator | Monday 23 February 2026 20:32:20 +0000 (0:00:00.663) 0:00:09.932 ******* 2026-02-23 20:34:25.908434 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:34:25.908439 | orchestrator | 2026-02-23 20:34:25.908444 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-23 20:34:25.908454 | orchestrator | Monday 23 February 2026 20:32:21 +0000 (0:00:00.681) 0:00:10.613 ******* 2026-02-23 20:34:25.908460 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:34:25.908466 | orchestrator | 2026-02-23 20:34:25.908471 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-23 20:34:25.908476 | orchestrator | Monday 23 February 2026 20:32:22 +0000 (0:00:00.784) 0:00:11.398 ******* 2026-02-23 
20:34:25.908486 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:34:25.908491 | orchestrator | 2026-02-23 20:34:25.908497 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-23 20:34:25.908502 | orchestrator | Monday 23 February 2026 20:32:22 +0000 (0:00:00.341) 0:00:11.739 ******* 2026-02-23 20:34:25.908507 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:34:25.908513 | orchestrator | 2026-02-23 20:34:25.908518 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-23 20:34:25.908523 | orchestrator | Monday 23 February 2026 20:32:23 +0000 (0:00:00.531) 0:00:12.270 ******* 2026-02-23 20:34:25.908533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:34:25.908542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:34:25.908552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 
20:34:25.908559 | orchestrator | 2026-02-23 20:34:25.908564 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-23 20:34:25.908574 | orchestrator | Monday 23 February 2026 20:32:24 +0000 (0:00:01.406) 0:00:13.677 ******* 2026-02-23 20:34:25.908585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:34:25.908662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:34:25.908670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:34:25.908676 | orchestrator | 2026-02-23 20:34:25.908682 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-23 20:34:25.908687 | orchestrator | Monday 23 February 2026 20:32:27 +0000 (0:00:02.968) 0:00:16.646 ******* 2026-02-23 20:34:25.908692 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-23 20:34:25.908698 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-23 20:34:25.908703 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-23 20:34:25.908708 | orchestrator | 2026-02-23 20:34:25.908714 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-23 20:34:25.908724 | orchestrator | Monday 23 February 2026 20:32:30 +0000 (0:00:02.893) 0:00:19.539 ******* 2026-02-23 20:34:25.908733 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-23 20:34:25.908739 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-23 20:34:25.908744 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-23 20:34:25.908749 | orchestrator | 2026-02-23 20:34:25.908755 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-23 20:34:25.908765 | orchestrator | Monday 23 February 2026 20:32:32 +0000 (0:00:02.202) 0:00:21.742 ******* 2026-02-23 20:34:25.908770 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-23 20:34:25.908776 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-23 20:34:25.908781 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-23 20:34:25.908786 | orchestrator | 2026-02-23 20:34:25.908792 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-23 20:34:25.908797 | orchestrator | Monday 23 February 2026 20:32:34 +0000 (0:00:01.969) 0:00:23.712 ******* 2026-02-23 20:34:25.908802 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-23 20:34:25.908808 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-23 20:34:25.908813 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-23 20:34:25.908817 | orchestrator | 2026-02-23 20:34:25.908823 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-23 20:34:25.908828 | orchestrator | Monday 23 February 2026 20:32:37 +0000 (0:00:03.136) 0:00:26.848 ******* 2026-02-23 20:34:25.908833 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-23 20:34:25.908838 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-23 20:34:25.908843 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-23 20:34:25.908848 | orchestrator | 2026-02-23 20:34:25.908853 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-23 20:34:25.908858 | orchestrator | Monday 23 February 2026 20:32:39 +0000 (0:00:01.565) 0:00:28.414 ******* 2026-02-23 20:34:25.908863 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-23 20:34:25.908869 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-23 20:34:25.908874 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-23 20:34:25.908879 | orchestrator | 2026-02-23 20:34:25.908884 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-23 20:34:25.908889 | orchestrator | Monday 23 February 2026 20:32:40 +0000 (0:00:01.655) 0:00:30.069 ******* 2026-02-23 
20:34:25.908894 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:34:25.908899 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:34:25.908904 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:34:25.908909 | orchestrator | 2026-02-23 20:34:25.908914 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-23 20:34:25.908919 | orchestrator | Monday 23 February 2026 20:32:41 +0000 (0:00:00.421) 0:00:30.490 ******* 2026-02-23 20:34:25.908925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:34:25.908942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:34:25.908949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:34:25.908954 | orchestrator | 2026-02-23 20:34:25.908959 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-23 20:34:25.908965 | orchestrator | Monday 23 February 2026 
20:32:42 +0000 (0:00:01.388) 0:00:31.879 ******* 2026-02-23 20:34:25.908970 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:34:25.908975 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:34:25.908980 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:34:25.908985 | orchestrator | 2026-02-23 20:34:25.908990 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-23 20:34:25.908995 | orchestrator | Monday 23 February 2026 20:32:43 +0000 (0:00:01.016) 0:00:32.896 ******* 2026-02-23 20:34:25.909000 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:34:25.909004 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:34:25.909009 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:34:25.909015 | orchestrator | 2026-02-23 20:34:25.909020 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-23 20:34:25.909025 | orchestrator | Monday 23 February 2026 20:32:51 +0000 (0:00:07.665) 0:00:40.562 ******* 2026-02-23 20:34:25.909030 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:34:25.909039 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:34:25.909045 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:34:25.909051 | orchestrator | 2026-02-23 20:34:25.909055 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-23 20:34:25.909060 | orchestrator | 2026-02-23 20:34:25.909066 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-23 20:34:25.909071 | orchestrator | Monday 23 February 2026 20:32:52 +0000 (0:00:01.252) 0:00:41.815 ******* 2026-02-23 20:34:25.909076 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:34:25.909081 | orchestrator | 2026-02-23 20:34:25.909086 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-23 20:34:25.909091 | orchestrator | Monday 23 
February 2026 20:32:53 +0000 (0:00:00.684) 0:00:42.500 ******* 2026-02-23 20:34:25.909097 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:34:25.909102 | orchestrator | 2026-02-23 20:34:25.909106 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-23 20:34:25.909112 | orchestrator | Monday 23 February 2026 20:32:53 +0000 (0:00:00.198) 0:00:42.698 ******* 2026-02-23 20:34:25.909117 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:34:25.909122 | orchestrator | 2026-02-23 20:34:25.909127 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-23 20:34:25.909132 | orchestrator | Monday 23 February 2026 20:32:55 +0000 (0:00:01.783) 0:00:44.481 ******* 2026-02-23 20:34:25.909137 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:34:25.909142 | orchestrator | 2026-02-23 20:34:25.909147 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-23 20:34:25.909152 | orchestrator | 2026-02-23 20:34:25.909157 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-23 20:34:25.909162 | orchestrator | Monday 23 February 2026 20:33:47 +0000 (0:00:52.485) 0:01:36.967 ******* 2026-02-23 20:34:25.909167 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:34:25.909172 | orchestrator | 2026-02-23 20:34:25.909177 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-23 20:34:25.909182 | orchestrator | Monday 23 February 2026 20:33:48 +0000 (0:00:00.645) 0:01:37.612 ******* 2026-02-23 20:34:25.909187 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:34:25.909192 | orchestrator | 2026-02-23 20:34:25.909197 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-23 20:34:25.909202 | orchestrator | Monday 23 February 2026 20:33:48 +0000 (0:00:00.292) 
0:01:37.905 ******* 2026-02-23 20:34:25.909207 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:34:25.909213 | orchestrator | 2026-02-23 20:34:25.909218 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-23 20:34:25.909226 | orchestrator | Monday 23 February 2026 20:33:50 +0000 (0:00:02.227) 0:01:40.133 ******* 2026-02-23 20:34:25.909231 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:34:25.909236 | orchestrator | 2026-02-23 20:34:25.909241 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-23 20:34:25.909246 | orchestrator | 2026-02-23 20:34:25.909251 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-23 20:34:25.909256 | orchestrator | Monday 23 February 2026 20:34:05 +0000 (0:00:14.354) 0:01:54.488 ******* 2026-02-23 20:34:25.909261 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:34:25.909266 | orchestrator | 2026-02-23 20:34:25.909274 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-23 20:34:25.909279 | orchestrator | Monday 23 February 2026 20:34:05 +0000 (0:00:00.650) 0:01:55.138 ******* 2026-02-23 20:34:25.909285 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:34:25.909290 | orchestrator | 2026-02-23 20:34:25.909295 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-23 20:34:25.909301 | orchestrator | Monday 23 February 2026 20:34:06 +0000 (0:00:00.206) 0:01:55.344 ******* 2026-02-23 20:34:25.909306 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:34:25.909311 | orchestrator | 2026-02-23 20:34:25.909316 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-23 20:34:25.909329 | orchestrator | Monday 23 February 2026 20:34:07 +0000 (0:00:01.702) 0:01:57.047 ******* 2026-02-23 20:34:25.909334 | 
orchestrator | changed: [testbed-node-2] 2026-02-23 20:34:25.909339 | orchestrator | 2026-02-23 20:34:25.909344 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-23 20:34:25.909349 | orchestrator | 2026-02-23 20:34:25.909355 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-23 20:34:25.909360 | orchestrator | Monday 23 February 2026 20:34:21 +0000 (0:00:13.479) 0:02:10.527 ******* 2026-02-23 20:34:25.909365 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:34:25.909370 | orchestrator | 2026-02-23 20:34:25.909375 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-23 20:34:25.909380 | orchestrator | Monday 23 February 2026 20:34:21 +0000 (0:00:00.553) 0:02:11.081 ******* 2026-02-23 20:34:25.909386 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:34:25.909391 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:34:25.909396 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:34:25.909401 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-23 20:34:25.909406 | orchestrator | enable_outward_rabbitmq_True 2026-02-23 20:34:25.909411 | orchestrator | 2026-02-23 20:34:25.909416 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-23 20:34:25.909421 | orchestrator | skipping: no hosts matched 2026-02-23 20:34:25.909427 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-23 20:34:25.909432 | orchestrator | outward_rabbitmq_restart 2026-02-23 20:34:25.909437 | orchestrator | 2026-02-23 20:34:25.909442 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-23 20:34:25.909447 | orchestrator | skipping: no hosts matched 2026-02-23 20:34:25.909452 | orchestrator | 2026-02-23 20:34:25.909457 | 
orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-23 20:34:25.909462 | orchestrator | skipping: no hosts matched 2026-02-23 20:34:25.909467 | orchestrator | 2026-02-23 20:34:25.909472 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:34:25.909477 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-23 20:34:25.909483 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-23 20:34:25.909489 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:34:25.909494 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:34:25.909499 | orchestrator | 2026-02-23 20:34:25.909504 | orchestrator | 2026-02-23 20:34:25.909509 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:34:25.909515 | orchestrator | Monday 23 February 2026 20:34:24 +0000 (0:00:02.325) 0:02:13.406 ******* 2026-02-23 20:34:25.909519 | orchestrator | =============================================================================== 2026-02-23 20:34:25.909525 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.32s 2026-02-23 20:34:25.909529 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.67s 2026-02-23 20:34:25.909534 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.71s 2026-02-23 20:34:25.909539 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.36s 2026-02-23 20:34:25.909544 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.14s 2026-02-23 20:34:25.909549 | orchestrator | rabbitmq : 
Copying over config.json files for services ------------------ 2.97s 2026-02-23 20:34:25.909558 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.89s 2026-02-23 20:34:25.909562 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.33s 2026-02-23 20:34:25.909567 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.20s 2026-02-23 20:34:25.909572 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.98s 2026-02-23 20:34:25.909576 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.97s 2026-02-23 20:34:25.909581 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.66s 2026-02-23 20:34:25.909601 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.57s 2026-02-23 20:34:25.909610 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.41s 2026-02-23 20:34:25.909615 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.39s 2026-02-23 20:34:25.909619 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.25s 2026-02-23 20:34:25.909624 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.15s 2026-02-23 20:34:25.909632 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.02s 2026-02-23 20:34:25.909637 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s 2026-02-23 20:34:25.909641 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.89s 2026-02-23 20:34:25.909647 | orchestrator | 2026-02-23 20:34:25 | INFO  | Task dbdd81f0-7fb1-410a-94de-02b083b3f3a0 is in state SUCCESS 2026-02-23 20:34:25.909652 | orchestrator | 2026-02-23 
20:34:25 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:34:25.909657 | orchestrator | 2026-02-23 20:34:25 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:34:25.909661 | orchestrator | 2026-02-23 20:34:25 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:34:25.909666 | orchestrator | 2026-02-23 20:34:25 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:34:47.154139 | orchestrator | 2026-02-23 20:34:47 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:34:47.154435 | orchestrator | 2026-02-23 20:34:47 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state STARTED 2026-02-23 20:34:47.155134 | orchestrator | 2026-02-23 20:34:47 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
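The repeating status lines above come from a simple poll-and-wait loop in the task runner: each round prints the state of every still-running task and then sleeps before the next check. A minimal Python sketch of that pattern (the `get_task_state` callable is a hypothetical stand-in, not the real OSISM client API) could look like:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until every one leaves the STARTED state.

    get_task_state(task_id) -> str is a hypothetical stand-in for the
    real status lookup; the log prints one line per task per round.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("STARTED", "PENDING"):
                results[task_id] = state
        pending -= set(results)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

One block of `STARTED` lines followed by a single `Wait 1 second(s)` line is exactly the cadence visible in the log; the roughly three-second gap between rounds is the sleep plus the time spent on the status lookups themselves.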
2026-02-23 20:34:47.155172 | orchestrator | 2026-02-23 20:34:47 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:35:08.455233 | orchestrator | 2026-02-23 20:35:08 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:35:08.458820 | orchestrator | 2026-02-23 20:35:08 | INFO  | Task c4847090-8054-425b-86dd-a8b3d6d8a0d7 is in state SUCCESS 2026-02-23 20:35:08.460262 | orchestrator | 2026-02-23 20:35:08.460304 | orchestrator | 2026-02-23 20:35:08.460311 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:35:08.460319 | orchestrator | 2026-02-23 20:35:08.460325 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:35:08.460332 | orchestrator | Monday 23 February 2026 20:33:04 +0000 (0:00:00.330) 0:00:00.330
******* 2026-02-23 20:35:08.460339 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:35:08.460346 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:35:08.460352 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:35:08.460358 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:35:08.460364 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:35:08.460370 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:35:08.460377 | orchestrator | 2026-02-23 20:35:08.460383 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:35:08.460389 | orchestrator | Monday 23 February 2026 20:33:04 +0000 (0:00:00.648) 0:00:00.978 ******* 2026-02-23 20:35:08.460396 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-23 20:35:08.460402 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-23 20:35:08.460408 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-23 20:35:08.460424 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-23 20:35:08.460431 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-23 20:35:08.460438 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-23 20:35:08.460444 | orchestrator | 2026-02-23 20:35:08.460450 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-23 20:35:08.460457 | orchestrator | 2026-02-23 20:35:08.460596 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-23 20:35:08.460691 | orchestrator | Monday 23 February 2026 20:33:05 +0000 (0:00:00.929) 0:00:01.907 ******* 2026-02-23 20:35:08.460703 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:35:08.460711 | orchestrator | 2026-02-23 20:35:08.460717 | orchestrator | TASK [ovn-controller : 
Ensuring config directories exist] ********************** 2026-02-23 20:35:08.460724 | orchestrator | Monday 23 February 2026 20:33:06 +0000 (0:00:00.988) 0:00:02.896 ******* 2026-02-23 20:35:08.460748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.460758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.460764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.460827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-23 20:35:08.460838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.460849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.460860 | orchestrator | 2026-02-23 20:35:08.460884 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-23 20:35:08.460896 | orchestrator | Monday 23 February 2026 20:33:07 +0000 (0:00:01.127) 0:00:04.023 ******* 2026-02-23 20:35:08.460907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.460923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.460931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.460945 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.460956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461639 | orchestrator | 2026-02-23 20:35:08.461651 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-23 20:35:08.461660 | orchestrator | Monday 23 February 2026 20:33:09 +0000 (0:00:01.403) 0:00:05.426 ******* 2026-02-23 20:35:08.461667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461746 | orchestrator | 2026-02-23 20:35:08.461756 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-23 20:35:08.461765 | orchestrator | Monday 23 February 2026 20:33:10 +0000 (0:00:01.205) 0:00:06.632 ******* 2026-02-23 20:35:08.461775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461786 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461838 | orchestrator | 2026-02-23 20:35:08.461856 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-23 20:35:08.461868 | orchestrator | Monday 23 February 2026 20:33:12 +0000 (0:00:01.773) 0:00:08.406 ******* 2026-02-23 20:35:08.461879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.461950 | orchestrator | 2026-02-23 20:35:08.461960 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-23 20:35:08.461970 | orchestrator | Monday 23 February 2026 20:33:13 +0000 (0:00:01.275) 0:00:09.681 ******* 2026-02-23 20:35:08.461980 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:35:08.461990 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:35:08.462000 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:35:08.462009 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:35:08.462082 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:35:08.462094 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:35:08.462104 | orchestrator | 
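The `Configure OVN in OVSDB` task that follows writes a handful of per-chassis `external_ids` keys into the local Open vSwitch database. As a rough illustration only (the Kolla role drives these settings through an Ansible module rather than shelling out), the settings and the `ovn-remote` value, which joins every controller's southbound endpoint, can be assembled like this:

```python
def ovn_remote(controller_ips, sb_port=6642):
    """Join the OVN southbound endpoints of all controllers,
    matching the ovn-remote value visible in the log."""
    return ",".join(f"tcp:{ip}:{sb_port}" for ip in controller_ips)

def external_id_commands(node_ip, controller_ips):
    """Build illustrative ovs-vsctl invocations for the common
    per-chassis external_ids seen in the task output."""
    settings = {
        "ovn-encap-ip": node_ip,
        "ovn-encap-type": "geneve",
        "ovn-remote": ovn_remote(controller_ips),
        "ovn-remote-probe-interval": "60000",
        "ovn-openflow-probe-interval": "60",
    }
    return [
        ["ovs-vsctl", "set", "Open_vSwitch", ".",
         f"external_ids:{key}={value}"]
        for key, value in settings.items()
    ]
```

With the three controller addresses from the log, `ovn_remote(["192.168.16.10", "192.168.16.11", "192.168.16.12"])` reproduces the `tcp:192.168.16.10:6642,...` string each node receives, while `ovn-encap-ip` is simply the node's own overlay address.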
2026-02-23 20:35:08.462163 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-23 20:35:08.462181 | orchestrator | Monday 23 February 2026 20:33:16 +0000 (0:00:03.188) 0:00:12.870 ******* 2026-02-23 20:35:08.462192 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-23 20:35:08.462203 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-23 20:35:08.462212 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-23 20:35:08.462222 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-23 20:35:08.462231 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-23 20:35:08.462241 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-23 20:35:08.462259 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-23 20:35:08.462269 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-23 20:35:08.462288 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-23 20:35:08.462299 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-23 20:35:08.462310 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-23 20:35:08.462321 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-23 20:35:08.462330 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-23 20:35:08.462341 | 
orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-23 20:35:08.462351 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-23 20:35:08.462366 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-23 20:35:08.462377 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-23 20:35:08.462387 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-23 20:35:08.462397 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-23 20:35:08.462408 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-23 20:35:08.462418 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-23 20:35:08.462427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-23 20:35:08.462437 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-23 20:35:08.462447 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-23 20:35:08.462458 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-23 20:35:08.462468 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-23 20:35:08.462479 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-23 20:35:08.462489 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-23 20:35:08.462499 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-23 20:35:08.462510 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-23 20:35:08.462521 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-23 20:35:08.462532 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-23 20:35:08.462540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-23 20:35:08.462642 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-23 20:35:08.462654 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-23 20:35:08.462660 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-23 20:35:08.462675 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-23 20:35:08.462681 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-23 20:35:08.462688 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-23 20:35:08.462694 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-23 20:35:08.462700 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-23 20:35:08.462707 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-23 20:35:08.462713 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-02-23 20:35:08.462720 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-02-23 20:35:08.462735 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-02-23 20:35:08.462742 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-02-23 20:35:08.462748 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-02-23 20:35:08.462755 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-02-23 20:35:08.462761 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-23 20:35:08.462768 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-23 20:35:08.462775 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-23 20:35:08.462785 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-23 20:35:08.462792 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-23 20:35:08.462799 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-23 20:35:08.462805 | orchestrator |
2026-02-23 20:35:08.462812 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-23 20:35:08.462819 | orchestrator | Monday 23 February 2026 20:33:35 +0000 (0:00:18.484) 0:00:31.354 *******
2026-02-23 20:35:08.462825 | orchestrator |
2026-02-23 20:35:08.462832 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-23 20:35:08.462847 | orchestrator | Monday 23 February 2026 20:33:35 +0000 (0:00:00.061) 0:00:31.416 *******
2026-02-23 20:35:08.462854 | orchestrator |
2026-02-23 20:35:08.462861 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-23 20:35:08.462868 | orchestrator | Monday 23 February 2026 20:33:35 +0000 (0:00:00.059) 0:00:31.475 *******
2026-02-23 20:35:08.462874 | orchestrator |
2026-02-23 20:35:08.462881 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-23 20:35:08.462888 | orchestrator | Monday 23 February 2026 20:33:35 +0000 (0:00:00.057) 0:00:31.533 *******
2026-02-23 20:35:08.462894 | orchestrator |
2026-02-23 20:35:08.462903 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-23 20:35:08.462913 | orchestrator | Monday 23 February 2026 20:33:35 +0000 (0:00:00.071) 0:00:31.604 *******
2026-02-23 20:35:08.462938 | orchestrator |
2026-02-23 20:35:08.462952 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-23 20:35:08.462963 | orchestrator | Monday 23 February 2026 20:33:35 +0000 (0:00:00.058) 0:00:31.663 *******
2026-02-23 20:35:08.462972 | orchestrator |
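The loop above shows the ovn-controller role writing per-chassis Open vSwitch `external_ids`; the `ovn-remote` value points every chassis at the clustered OVN southbound DB on port 6642 across the three controller IPs. A minimal sketch of how that connection string is composed (node IPs taken from the log; the `ovs-vsctl` call in the comment is illustrative of the effect, not the role's literal task):

```shell
# Build the ovn-remote connection string for the three OVN DB hosts,
# matching the value applied in the log above.
NODES="192.168.16.10 192.168.16.11 192.168.16.12"
REMOTE=$(for ip in $NODES; do printf 'tcp:%s:6642,' "$ip"; done)
REMOTE=${REMOTE%,}   # strip the trailing comma
echo "$REMOTE"
# On a chassis this is applied roughly as (illustrative):
#   ovs-vsctl set open_vswitch . external_ids:ovn-remote="$REMOTE"
```

The probe intervals set alongside it (`ovn-remote-probe-interval`, `ovn-openflow-probe-interval`) tune how aggressively each chassis checks those connections.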
2026-02-23 20:35:08.462982 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-02-23 20:35:08.462991 | orchestrator | Monday 23 February 2026 20:33:35 +0000 (0:00:00.058) 0:00:31.721 *******
2026-02-23 20:35:08.463002 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:35:08.463014 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.463025 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:35:08.463035 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:35:08.463046 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.463057 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.463067 | orchestrator |
2026-02-23 20:35:08.463076 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-02-23 20:35:08.463082 | orchestrator | Monday 23 February 2026 20:33:37 +0000 (0:00:01.719) 0:00:33.441 *******
2026-02-23 20:35:08.463089 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:35:08.463097 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:35:08.463108 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:35:08.463122 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:35:08.463135 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:35:08.463145 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:35:08.463155 | orchestrator |
2026-02-23 20:35:08.463166 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-02-23 20:35:08.463189 | orchestrator |
2026-02-23 20:35:08.463200 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-23 20:35:08.463209 | orchestrator | Monday 23 February 2026 20:34:00 +0000 (0:00:23.511) 0:00:56.952 *******
2026-02-23 20:35:08.463228 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:35:08.463240 | orchestrator |
2026-02-23 20:35:08.463250 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-23 20:35:08.463260 | orchestrator | Monday 23 February 2026 20:34:01 +0000 (0:00:00.764) 0:00:57.717 *******
2026-02-23 20:35:08.463270 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:35:08.463280 | orchestrator |
2026-02-23 20:35:08.463300 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-02-23 20:35:08.463310 | orchestrator | Monday 23 February 2026 20:34:02 +0000 (0:00:00.925) 0:00:58.643 *******
2026-02-23 20:35:08.463320 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.463329 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.463339 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.463350 | orchestrator |
2026-02-23 20:35:08.463361 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-02-23 20:35:08.463371 | orchestrator | Monday 23 February 2026 20:34:03 +0000 (0:00:00.971) 0:00:59.614 *******
2026-02-23 20:35:08.463382 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.463393 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.463404 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.463421 | orchestrator |
2026-02-23 20:35:08.463428 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-02-23 20:35:08.463434 | orchestrator | Monday 23 February 2026 20:34:03 +0000 (0:00:00.264) 0:00:59.879 *******
2026-02-23 20:35:08.463441 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.463447 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.463453 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.463459 | orchestrator |
2026-02-23 20:35:08.463466 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-02-23 20:35:08.463472 | orchestrator | Monday 23 February 2026 20:34:04 +0000 (0:00:00.286) 0:01:00.165 *******
2026-02-23 20:35:08.463478 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.463491 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.463497 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.463504 | orchestrator |
2026-02-23 20:35:08.463510 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-23 20:35:08.463516 | orchestrator | Monday 23 February 2026 20:34:04 +0000 (0:00:00.278) 0:01:00.444 *******
2026-02-23 20:35:08.463522 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.463531 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.463541 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.463593 | orchestrator |
2026-02-23 20:35:08.463614 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-23 20:35:08.463625 | orchestrator | Monday 23 February 2026 20:34:04 +0000 (0:00:00.421) 0:01:00.866 *******
2026-02-23 20:35:08.463635 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.463645 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.463655 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.463665 | orchestrator |
2026-02-23 20:35:08.463676 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-23 20:35:08.463685 | orchestrator | Monday 23 February 2026 20:34:04 +0000 (0:00:00.258) 0:01:01.125 *******
2026-02-23 20:35:08.463691 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.463697 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.463703 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.463710 | orchestrator |
2026-02-23 20:35:08.463721 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-23 20:35:08.463731 | orchestrator | Monday 23 February 2026 20:34:05 +0000 (0:00:00.320) 0:01:01.445 *******
2026-02-23 20:35:08.463742 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.463751 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.463760 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.463770 | orchestrator |
2026-02-23 20:35:08.463779 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-23 20:35:08.463790 | orchestrator | Monday 23 February 2026 20:34:05 +0000 (0:00:00.264) 0:01:01.709 *******
2026-02-23 20:35:08.463796 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.463801 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.463806 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.463812 | orchestrator |
2026-02-23 20:35:08.463817 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-23 20:35:08.463824 | orchestrator | Monday 23 February 2026 20:34:05 +0000 (0:00:00.337) 0:01:02.047 *******
2026-02-23 20:35:08.463834 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.463843 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.463851 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.463860 | orchestrator |
2026-02-23 20:35:08.463870 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-23 20:35:08.463879 | orchestrator | Monday 23 February 2026 20:34:06 +0000 (0:00:00.268) 0:01:02.316 *******
2026-02-23 20:35:08.463888 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.463898 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.463907 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.463916 | orchestrator |
2026-02-23 20:35:08.463926 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-23 20:35:08.463935 | orchestrator | Monday 23 February 2026 20:34:06 +0000 (0:00:00.263) 0:01:02.579 *******
2026-02-23 20:35:08.463944 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.463954 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.463964 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.463973 | orchestrator |
2026-02-23 20:35:08.463980 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-23 20:35:08.463986 | orchestrator | Monday 23 February 2026 20:34:06 +0000 (0:00:00.248) 0:01:02.827 *******
2026-02-23 20:35:08.463991 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.463996 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464007 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464013 | orchestrator |
2026-02-23 20:35:08.464025 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-23 20:35:08.464038 | orchestrator | Monday 23 February 2026 20:34:07 +0000 (0:00:00.386) 0:01:03.214 *******
2026-02-23 20:35:08.464047 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.464056 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464064 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464073 | orchestrator |
2026-02-23 20:35:08.464082 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-23 20:35:08.464092 | orchestrator | Monday 23 February 2026 20:34:07 +0000 (0:00:00.258) 0:01:03.473 *******
2026-02-23 20:35:08.464101 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.464110 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464119 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464125 | orchestrator |
2026-02-23 20:35:08.464130 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-23 20:35:08.464137 | orchestrator | Monday 23 February 2026 20:34:07 +0000 (0:00:00.282) 0:01:03.755 *******
2026-02-23 20:35:08.464147 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.464156 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464165 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464174 | orchestrator |
2026-02-23 20:35:08.464182 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-23 20:35:08.464192 | orchestrator | Monday 23 February 2026 20:34:07 +0000 (0:00:00.267) 0:01:04.023 *******
2026-02-23 20:35:08.464201 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.464211 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464227 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464236 | orchestrator |
2026-02-23 20:35:08.464245 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-23 20:35:08.464254 | orchestrator | Monday 23 February 2026 20:34:08 +0000 (0:00:00.306) 0:01:04.329 *******
2026-02-23 20:35:08.464264 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:35:08.464273 | orchestrator |
2026-02-23 20:35:08.464283 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-23 20:35:08.464292 | orchestrator | Monday 23 February 2026 20:34:08 +0000 (0:00:00.643) 0:01:04.973 *******
2026-02-23 20:35:08.464301 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.464310 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.464319 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.464329 | orchestrator |
2026-02-23 20:35:08.464336 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-23 20:35:08.464345 | orchestrator | Monday 23 February 2026 20:34:09 +0000 (0:00:00.438) 0:01:05.391 *******
2026-02-23 20:35:08.464354 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.464364 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.464373 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.464381 | orchestrator |
2026-02-23 20:35:08.464396 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-23 20:35:08.464405 | orchestrator | Monday 23 February 2026 20:34:09 +0000 (0:00:00.438) 0:01:05.830 *******
2026-02-23 20:35:08.464413 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.464422 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464431 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464439 | orchestrator |
2026-02-23 20:35:08.464448 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-23 20:35:08.464457 | orchestrator | Monday 23 February 2026 20:34:10 +0000 (0:00:00.427) 0:01:06.257 *******
2026-02-23 20:35:08.464466 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.464475 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464484 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464492 | orchestrator |
2026-02-23 20:35:08.464509 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-23 20:35:08.464518 | orchestrator | Monday 23 February 2026 20:34:10 +0000 (0:00:00.306) 0:01:06.564 *******
2026-02-23 20:35:08.464526 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.464534 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464543 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464569 | orchestrator |
2026-02-23 20:35:08.464579 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-23 20:35:08.464588 | orchestrator | Monday 23 February 2026 20:34:10 +0000 (0:00:00.285) 0:01:06.850 *******
2026-02-23 20:35:08.464597 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.464607 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464616 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464625 | orchestrator |
2026-02-23 20:35:08.464634 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-23 20:35:08.464642 | orchestrator | Monday 23 February 2026 20:34:10 +0000 (0:00:00.289) 0:01:07.140 *******
2026-02-23 20:35:08.464650 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.464657 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464665 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464673 | orchestrator |
2026-02-23 20:35:08.464682 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-23 20:35:08.464691 | orchestrator | Monday 23 February 2026 20:34:11 +0000 (0:00:00.426) 0:01:07.567 *******
2026-02-23 20:35:08.464699 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.464707 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.464716 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.464723 | orchestrator |
2026-02-23 20:35:08.464731 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-23 20:35:08.464739 | orchestrator | Monday 23 February 2026 20:34:11 +0000 (0:00:00.270) 0:01:07.837 *******
2026-02-23 20:35:08.464748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464855 | orchestrator |
2026-02-23 20:35:08.464864 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-23 20:35:08.464872 | orchestrator | Monday 23 February 2026 20:34:13 +0000 (0:00:01.358) 0:01:09.196 *******
2026-02-23 20:35:08.464880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.464972 | orchestrator |
2026-02-23 20:35:08.464980 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-23 20:35:08.464988 | orchestrator | Monday 23 February 2026 20:34:16 +0000 (0:00:03.685) 0:01:12.881 *******
2026-02-23 20:35:08.464997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.465006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.465015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.465025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.465034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.465048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.465062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.465074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.465082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:35:08.465090 | orchestrator |
2026-02-23 20:35:08.465098 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-23 20:35:08.465106 | orchestrator | Monday 23 February 2026 20:34:19 +0000 (0:00:02.853) 0:01:15.734 *******
2026-02-23 20:35:08.465116 | orchestrator |
2026-02-23 20:35:08.465125 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-23 20:35:08.465133 | orchestrator | Monday 23 February 2026 20:34:19 +0000 (0:00:00.084) 0:01:15.819 *******
2026-02-23 20:35:08.465141 | orchestrator |
2026-02-23 20:35:08.465149 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-23 20:35:08.465157 | orchestrator | Monday 23 February 2026 20:34:19 +0000 (0:00:00.066) 0:01:15.886 *******
2026-02-23 20:35:08.465165 | orchestrator |
2026-02-23 20:35:08.465173 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-23 20:35:08.465181 | orchestrator | Monday 23 February 2026 20:34:19 +0000 (0:00:00.064) 0:01:15.951 *******
2026-02-23 20:35:08.465190 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:35:08.465199 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:35:08.465208 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:35:08.465217 | orchestrator |
2026-02-23 20:35:08.465225 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-23 20:35:08.465233 | orchestrator | Monday 23 February 2026 20:34:27 +0000 (0:00:07.655) 0:01:23.606 *******
2026-02-23 20:35:08.465241 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:35:08.465250 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:35:08.465258 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:35:08.465267 | orchestrator |
2026-02-23 20:35:08.465275 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-23 20:35:08.465284 | orchestrator | Monday 23 February 2026 20:34:29 +0000 (0:00:02.486) 0:01:26.093 *******
2026-02-23 20:35:08.465293 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:35:08.465302 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:35:08.465310 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:35:08.465318 | orchestrator |
2026-02-23 20:35:08.465327 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-23 20:35:08.465335 | orchestrator | Monday 23 February 2026 20:34:32 +0000 (0:00:02.341) 0:01:28.435 *******
2026-02-23 20:35:08.465343 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:35:08.465352 | orchestrator |
2026-02-23 20:35:08.465360 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-23 20:35:08.465375 | orchestrator | Monday 23 February 2026 20:34:32 +0000 (0:00:00.119) 0:01:28.555 *******
2026-02-23 20:35:08.465385 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.465394 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.465402 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.465410 | orchestrator |
2026-02-23 20:35:08.465419 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-23 20:35:08.465427 | orchestrator | Monday 23 February 2026 20:34:33 +0000 (0:00:00.615) 0:01:29.170 *******
2026-02-23 20:35:08.465436 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.465445 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.465454 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:35:08.465463 | orchestrator |
2026-02-23 20:35:08.465472 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-23 20:35:08.465481 | orchestrator | Monday 23 February 2026 20:34:33 +0000 (0:00:00.538) 0:01:29.709 *******
2026-02-23 20:35:08.465489 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.465498 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.465506 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.465515 | orchestrator |
2026-02-23 20:35:08.465523 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings]
*************************** 2026-02-23 20:35:08.465532 | orchestrator | Monday 23 February 2026 20:34:34 +0000 (0:00:00.634) 0:01:30.344 ******* 2026-02-23 20:35:08.465541 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:35:08.465594 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:35:08.465604 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:35:08.465613 | orchestrator | 2026-02-23 20:35:08.465622 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-23 20:35:08.465632 | orchestrator | Monday 23 February 2026 20:34:34 +0000 (0:00:00.661) 0:01:31.006 ******* 2026-02-23 20:35:08.465641 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:35:08.465649 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:35:08.465663 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:35:08.465672 | orchestrator | 2026-02-23 20:35:08.465681 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-23 20:35:08.465690 | orchestrator | Monday 23 February 2026 20:34:35 +0000 (0:00:00.711) 0:01:31.717 ******* 2026-02-23 20:35:08.465699 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:35:08.465709 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:35:08.465721 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:35:08.465731 | orchestrator | 2026-02-23 20:35:08.465740 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-23 20:35:08.465749 | orchestrator | Monday 23 February 2026 20:34:36 +0000 (0:00:00.658) 0:01:32.376 ******* 2026-02-23 20:35:08.465758 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:35:08.465767 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:35:08.465776 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:35:08.465785 | orchestrator | 2026-02-23 20:35:08.465795 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-23 20:35:08.465804 | 
orchestrator | Monday 23 February 2026 20:34:36 +0000 (0:00:00.263) 0:01:32.640 ******* 2026-02-23 20:35:08.465818 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465828 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465838 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465853 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465863 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465872 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465881 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465889 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465901 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465910 | orchestrator | 2026-02-23 20:35:08.465918 | 
orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-23 20:35:08.465926 | orchestrator | Monday 23 February 2026 20:34:37 +0000 (0:00:01.296) 0:01:33.937 ******* 2026-02-23 20:35:08.465934 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465945 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465953 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465976 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.465993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466002 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466054 | orchestrator | 2026-02-23 20:35:08.466063 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-23 20:35:08.466071 | orchestrator | Monday 23 February 2026 20:34:41 +0000 (0:00:03.746) 0:01:37.683 ******* 2026-02-23 20:35:08.466086 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466095 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466107 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466130 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466139 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466164 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:35:08.466173 | orchestrator | 2026-02-23 20:35:08.466181 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-23 20:35:08.466189 | orchestrator | Monday 23 February 2026 20:34:44 +0000 (0:00:02.642) 0:01:40.326 ******* 2026-02-23 20:35:08.466197 | orchestrator | 2026-02-23 20:35:08.466205 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-23 20:35:08.466213 | orchestrator | Monday 23 February 2026 20:34:44 +0000 (0:00:00.062) 0:01:40.389 ******* 2026-02-23 20:35:08.466221 | orchestrator | 2026-02-23 20:35:08.466230 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-23 20:35:08.466238 | orchestrator | Monday 23 February 2026 20:34:44 +0000 (0:00:00.061) 0:01:40.450 ******* 2026-02-23 20:35:08.466246 | orchestrator | 2026-02-23 20:35:08.466254 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-23 20:35:08.466262 | orchestrator | Monday 23 February 2026 20:34:44 +0000 (0:00:00.062) 0:01:40.513 ******* 2026-02-23 20:35:08.466270 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:35:08.466279 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:35:08.466287 | orchestrator | 2026-02-23 20:35:08.466301 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-23 20:35:08.466310 | orchestrator | Monday 23 February 2026 20:34:50 +0000 (0:00:06.427) 0:01:46.941 ******* 2026-02-23 20:35:08.466324 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:35:08.466332 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:35:08.466339 | orchestrator | 2026-02-23 20:35:08.466347 | orchestrator | RUNNING 
HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-23 20:35:08.466355 | orchestrator | Monday 23 February 2026 20:34:56 +0000 (0:00:06.086) 0:01:53.027 ******* 2026-02-23 20:35:08.466363 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:35:08.466370 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:35:08.466377 | orchestrator | 2026-02-23 20:35:08.466385 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-23 20:35:08.466393 | orchestrator | Monday 23 February 2026 20:35:03 +0000 (0:00:06.432) 0:01:59.459 ******* 2026-02-23 20:35:08.466401 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:35:08.466410 | orchestrator | 2026-02-23 20:35:08.466417 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-23 20:35:08.466426 | orchestrator | Monday 23 February 2026 20:35:03 +0000 (0:00:00.151) 0:01:59.611 ******* 2026-02-23 20:35:08.466438 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:35:08.466447 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:35:08.466455 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:35:08.466463 | orchestrator | 2026-02-23 20:35:08.466472 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-23 20:35:08.466480 | orchestrator | Monday 23 February 2026 20:35:04 +0000 (0:00:00.721) 0:02:00.332 ******* 2026-02-23 20:35:08.466487 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:35:08.466495 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:35:08.466504 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:35:08.466512 | orchestrator | 2026-02-23 20:35:08.466520 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-23 20:35:08.466528 | orchestrator | Monday 23 February 2026 20:35:04 +0000 (0:00:00.629) 0:02:00.962 ******* 2026-02-23 20:35:08.466537 | orchestrator | ok: 
[testbed-node-0]
2026-02-23 20:35:08.466545 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.466570 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.466578 | orchestrator |
2026-02-23 20:35:08.466585 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-23 20:35:08.466590 | orchestrator | Monday 23 February 2026 20:35:05 +0000 (0:00:00.717) 0:02:01.679 *******
2026-02-23 20:35:08.466595 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:35:08.466600 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:35:08.466605 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:35:08.466609 | orchestrator |
2026-02-23 20:35:08.466614 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-23 20:35:08.466619 | orchestrator | Monday 23 February 2026 20:35:06 +0000 (0:00:00.605) 0:02:02.285 *******
2026-02-23 20:35:08.466624 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.466629 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.466634 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.466639 | orchestrator |
2026-02-23 20:35:08.466644 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-23 20:35:08.466649 | orchestrator | Monday 23 February 2026 20:35:06 +0000 (0:00:00.836) 0:02:03.121 *******
2026-02-23 20:35:08.466653 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:35:08.466658 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:35:08.466663 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:35:08.466668 | orchestrator |
2026-02-23 20:35:08.466673 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:35:08.466678 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-23 20:35:08.466684 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-23 20:35:08.466689 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-23 20:35:08.466698 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:35:08.466703 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:35:08.466708 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:35:08.466713 | orchestrator |
2026-02-23 20:35:08.466718 | orchestrator |
2026-02-23 20:35:08.466723 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:35:08.466728 | orchestrator | Monday 23 February 2026 20:35:07 +0000 (0:00:00.920) 0:02:04.042 *******
2026-02-23 20:35:08.466733 | orchestrator | ===============================================================================
2026-02-23 20:35:08.466738 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.51s
2026-02-23 20:35:08.466743 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.48s
2026-02-23 20:35:08.466748 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.08s
2026-02-23 20:35:08.466752 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.77s
2026-02-23 20:35:08.466757 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.57s
2026-02-23 20:35:08.466762 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.75s
2026-02-23 20:35:08.466767 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.69s
2026-02-23 20:35:08.466777 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.19s
2026-02-23 20:35:08.466782 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.85s
2026-02-23 20:35:08.466787 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.64s
2026-02-23 20:35:08.466792 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.77s
2026-02-23 20:35:08.466796 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.72s
2026-02-23 20:35:08.466801 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.40s
2026-02-23 20:35:08.466806 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.36s
2026-02-23 20:35:08.466811 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.30s
2026-02-23 20:35:08.466816 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.28s
2026-02-23 20:35:08.466821 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.21s
2026-02-23 20:35:08.466829 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.13s
2026-02-23 20:35:08.466834 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 0.99s
2026-02-23 20:35:08.466839 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 0.97s
2026-02-23 20:35:08.466844 | orchestrator | 2026-02-23 20:35:08 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:35:08.466849 | orchestrator | 2026-02-23 20:35:08 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:35:11.510873 | orchestrator | 2026-02-23 20:35:11 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:35:11.511758 | orchestrator | 2026-02-23 20:35:11 | INFO  | Task
19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED
2026-02-23 20:35:11.511823 | orchestrator | 2026-02-23 20:35:11 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:36:30.632210 | orchestrator | 2026-02-23 20:36:30 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED
2026-02-23 20:36:30.634584 | orchestrator | 2026-02-23 20:36:30 | INFO  
| Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:36:30.634638 | orchestrator | 2026-02-23 20:36:30 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:36:33.678284 | orchestrator | 2026-02-23 20:36:33 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:36:33.679933 | orchestrator | 2026-02-23 20:36:33 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:36:33.680300 | orchestrator | 2026-02-23 20:36:33 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:36:36.714792 | orchestrator | 2026-02-23 20:36:36 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:36:36.715776 | orchestrator | 2026-02-23 20:36:36 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:36:36.715819 | orchestrator | 2026-02-23 20:36:36 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:36:39.757175 | orchestrator | 2026-02-23 20:36:39 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:36:39.760694 | orchestrator | 2026-02-23 20:36:39 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:36:39.761000 | orchestrator | 2026-02-23 20:36:39 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:36:42.807057 | orchestrator | 2026-02-23 20:36:42 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:36:42.809035 | orchestrator | 2026-02-23 20:36:42 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:36:42.809157 | orchestrator | 2026-02-23 20:36:42 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:36:45.851700 | orchestrator | 2026-02-23 20:36:45 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:36:45.853153 | orchestrator | 2026-02-23 20:36:45 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 
20:36:45.853228 | orchestrator | 2026-02-23 20:36:45 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:36:48.889870 | orchestrator | 2026-02-23 20:36:48 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:36:48.891774 | orchestrator | 2026-02-23 20:36:48 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:36:48.891919 | orchestrator | 2026-02-23 20:36:48 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:36:51.919542 | orchestrator | 2026-02-23 20:36:51 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:36:51.920196 | orchestrator | 2026-02-23 20:36:51 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:36:51.920226 | orchestrator | 2026-02-23 20:36:51 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:36:54.970160 | orchestrator | 2026-02-23 20:36:54 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:36:54.972536 | orchestrator | 2026-02-23 20:36:54 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:36:54.972582 | orchestrator | 2026-02-23 20:36:54 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:36:58.016382 | orchestrator | 2026-02-23 20:36:58 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:36:58.016489 | orchestrator | 2026-02-23 20:36:58 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:36:58.016498 | orchestrator | 2026-02-23 20:36:58 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:01.055822 | orchestrator | 2026-02-23 20:37:01 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:01.057869 | orchestrator | 2026-02-23 20:37:01 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:01.058812 | orchestrator | 2026-02-23 20:37:01 | INFO  | Wait 1 second(s) 
until the next check 2026-02-23 20:37:04.092703 | orchestrator | 2026-02-23 20:37:04 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:04.094000 | orchestrator | 2026-02-23 20:37:04 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:04.094369 | orchestrator | 2026-02-23 20:37:04 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:07.144866 | orchestrator | 2026-02-23 20:37:07 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:07.146909 | orchestrator | 2026-02-23 20:37:07 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:07.146953 | orchestrator | 2026-02-23 20:37:07 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:10.184313 | orchestrator | 2026-02-23 20:37:10 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:10.187710 | orchestrator | 2026-02-23 20:37:10 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:10.187914 | orchestrator | 2026-02-23 20:37:10 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:13.234976 | orchestrator | 2026-02-23 20:37:13 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:13.237248 | orchestrator | 2026-02-23 20:37:13 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:13.237317 | orchestrator | 2026-02-23 20:37:13 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:16.283685 | orchestrator | 2026-02-23 20:37:16 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:16.283863 | orchestrator | 2026-02-23 20:37:16 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:16.284050 | orchestrator | 2026-02-23 20:37:16 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:19.328503 | orchestrator | 2026-02-23 
20:37:19 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:19.332595 | orchestrator | 2026-02-23 20:37:19 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:19.332696 | orchestrator | 2026-02-23 20:37:19 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:22.371495 | orchestrator | 2026-02-23 20:37:22 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:22.371593 | orchestrator | 2026-02-23 20:37:22 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:22.371609 | orchestrator | 2026-02-23 20:37:22 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:25.418208 | orchestrator | 2026-02-23 20:37:25 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:25.420306 | orchestrator | 2026-02-23 20:37:25 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:25.420984 | orchestrator | 2026-02-23 20:37:25 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:28.470488 | orchestrator | 2026-02-23 20:37:28 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:28.474329 | orchestrator | 2026-02-23 20:37:28 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:28.474670 | orchestrator | 2026-02-23 20:37:28 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:31.519853 | orchestrator | 2026-02-23 20:37:31 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:31.523466 | orchestrator | 2026-02-23 20:37:31 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:31.523517 | orchestrator | 2026-02-23 20:37:31 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:34.562482 | orchestrator | 2026-02-23 20:37:34 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state 
STARTED 2026-02-23 20:37:34.564130 | orchestrator | 2026-02-23 20:37:34 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:34.564210 | orchestrator | 2026-02-23 20:37:34 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:37.611881 | orchestrator | 2026-02-23 20:37:37 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:37.614409 | orchestrator | 2026-02-23 20:37:37 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:37.614492 | orchestrator | 2026-02-23 20:37:37 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:40.663867 | orchestrator | 2026-02-23 20:37:40 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:40.665184 | orchestrator | 2026-02-23 20:37:40 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:40.665220 | orchestrator | 2026-02-23 20:37:40 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:43.703687 | orchestrator | 2026-02-23 20:37:43 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:43.705617 | orchestrator | 2026-02-23 20:37:43 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:43.705941 | orchestrator | 2026-02-23 20:37:43 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:46.749230 | orchestrator | 2026-02-23 20:37:46 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state STARTED 2026-02-23 20:37:46.751227 | orchestrator | 2026-02-23 20:37:46 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:46.751285 | orchestrator | 2026-02-23 20:37:46 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:37:49.805416 | orchestrator | 2026-02-23 20:37:49 | INFO  | Task c98d1929-75b5-46f5-aac1-2d4282cd9bf7 is in state SUCCESS 2026-02-23 20:37:49.805581 | orchestrator | 2026-02-23 20:37:49.807617 
| orchestrator | 2026-02-23 20:37:49.807698 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:37:49.807712 | orchestrator | 2026-02-23 20:37:49.807722 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:37:49.807732 | orchestrator | Monday 23 February 2026 20:31:49 +0000 (0:00:00.367) 0:00:00.367 ******* 2026-02-23 20:37:49.807740 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.807750 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.807758 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.807765 | orchestrator | 2026-02-23 20:37:49.807773 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:37:49.808046 | orchestrator | Monday 23 February 2026 20:31:49 +0000 (0:00:00.355) 0:00:00.722 ******* 2026-02-23 20:37:49.808071 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-23 20:37:49.808081 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-23 20:37:49.808089 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-23 20:37:49.808098 | orchestrator | 2026-02-23 20:37:49.808129 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-23 20:37:49.808138 | orchestrator | 2026-02-23 20:37:49.808147 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-23 20:37:49.808156 | orchestrator | Monday 23 February 2026 20:31:50 +0000 (0:00:00.529) 0:00:01.252 ******* 2026-02-23 20:37:49.808166 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.808174 | orchestrator | 2026-02-23 20:37:49.808183 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-23 
20:37:49.808192 | orchestrator | Monday 23 February 2026 20:31:51 +0000 (0:00:00.637) 0:00:01.890 ******* 2026-02-23 20:37:49.808201 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.808211 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.808221 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.808230 | orchestrator | 2026-02-23 20:37:49.808239 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-23 20:37:49.808249 | orchestrator | Monday 23 February 2026 20:31:51 +0000 (0:00:00.724) 0:00:02.615 ******* 2026-02-23 20:37:49.808257 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.808266 | orchestrator | 2026-02-23 20:37:49.808274 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-23 20:37:49.808283 | orchestrator | Monday 23 February 2026 20:31:52 +0000 (0:00:00.904) 0:00:03.520 ******* 2026-02-23 20:37:49.808292 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.808300 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.808309 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.808353 | orchestrator | 2026-02-23 20:37:49.808364 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-23 20:37:49.808398 | orchestrator | Monday 23 February 2026 20:31:53 +0000 (0:00:00.805) 0:00:04.325 ******* 2026-02-23 20:37:49.808408 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-23 20:37:49.808417 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-23 20:37:49.808426 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-23 20:37:49.808435 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-23 
20:37:49.808443 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-23 20:37:49.808465 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-23 20:37:49.808476 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-23 20:37:49.808485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-23 20:37:49.808494 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-23 20:37:49.808503 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-23 20:37:49.809289 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-23 20:37:49.809297 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-23 20:37:49.809303 | orchestrator | 2026-02-23 20:37:49.809310 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-23 20:37:49.809316 | orchestrator | Monday 23 February 2026 20:31:58 +0000 (0:00:04.882) 0:00:09.207 ******* 2026-02-23 20:37:49.809322 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-23 20:37:49.809472 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-23 20:37:49.809485 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-23 20:37:49.809491 | orchestrator | 2026-02-23 20:37:49.809497 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-23 20:37:49.809504 | orchestrator | Monday 23 February 2026 20:32:00 +0000 (0:00:01.582) 0:00:10.790 ******* 2026-02-23 20:37:49.809509 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-23 20:37:49.809516 | orchestrator | 
changed: [testbed-node-0] => (item=ip_vs) 2026-02-23 20:37:49.809521 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-23 20:37:49.809527 | orchestrator | 2026-02-23 20:37:49.809533 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-23 20:37:49.809539 | orchestrator | Monday 23 February 2026 20:32:02 +0000 (0:00:02.423) 0:00:13.214 ******* 2026-02-23 20:37:49.809544 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-23 20:37:49.809550 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.809571 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-23 20:37:49.809578 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.809583 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-23 20:37:49.809589 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.809595 | orchestrator | 2026-02-23 20:37:49.809601 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-23 20:37:49.809607 | orchestrator | Monday 23 February 2026 20:32:04 +0000 (0:00:01.605) 0:00:14.819 ******* 2026-02-23 20:37:49.809615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.809638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.809645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.809658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.809665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.809677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.809685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.809692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.809701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.809707 | orchestrator | 2026-02-23 20:37:49.809713 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-23 20:37:49.809718 | orchestrator | Monday 23 February 2026 20:32:06 +0000 (0:00:02.297) 0:00:17.117 ******* 2026-02-23 20:37:49.809724 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.809730 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.810143 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.810153 | orchestrator | 2026-02-23 20:37:49.810159 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-23 20:37:49.810165 | orchestrator | Monday 23 February 2026 20:32:08 +0000 (0:00:01.767) 0:00:18.884 ******* 2026-02-23 20:37:49.810170 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-23 20:37:49.810176 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-23 20:37:49.810182 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-23 20:37:49.810187 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-23 20:37:49.810193 | orchestrator | changed: 
[testbed-node-2] => (item=rules) 2026-02-23 20:37:49.810198 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-23 20:37:49.810204 | orchestrator | 2026-02-23 20:37:49.810210 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-23 20:37:49.810215 | orchestrator | Monday 23 February 2026 20:32:10 +0000 (0:00:02.655) 0:00:21.540 ******* 2026-02-23 20:37:49.810226 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.810232 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.810237 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.810242 | orchestrator | 2026-02-23 20:37:49.810248 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-23 20:37:49.810254 | orchestrator | Monday 23 February 2026 20:32:12 +0000 (0:00:01.478) 0:00:23.018 ******* 2026-02-23 20:37:49.810259 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.810264 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.810270 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.810275 | orchestrator | 2026-02-23 20:37:49.810280 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-23 20:37:49.810286 | orchestrator | Monday 23 February 2026 20:32:14 +0000 (0:00:02.396) 0:00:25.415 ******* 2026-02-23 20:37:49.810295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.810423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.810685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.810708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-23 
20:37:49.810715 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.810720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.810731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.810737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.810742 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-23 20:37:49.810763 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.810789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.810796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.810801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.810806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-23 20:37:49.810811 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.810816 | orchestrator | 2026-02-23 20:37:49.810822 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-23 20:37:49.810831 | orchestrator | Monday 23 February 2026 20:32:15 +0000 (0:00:01.033) 0:00:26.449 ******* 2026-02-23 20:37:49.810836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.810842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.810866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.810871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.810877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.810882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-23 20:37:49.810891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.810898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.810915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-23 20:37:49.810939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.810947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.810955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f', '__omit_place_holder__1f7f75c9414b682618630359d68bb629da18114f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-23 20:37:49.810962 | orchestrator | 2026-02-23 20:37:49.810970 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-23 20:37:49.810978 | orchestrator | Monday 23 February 2026 20:32:19 +0000 (0:00:03.531) 0:00:29.980 ******* 2026-02-23 20:37:49.810988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.810996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.811076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.811089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.811094 | orchestrator | 2026-02-23 20:37:49.811099 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-23 20:37:49.811104 | orchestrator | Monday 23 February 2026 20:32:22 +0000 (0:00:03.340) 0:00:33.321 ******* 2026-02-23 20:37:49.811109 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-23 20:37:49.811115 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-23 20:37:49.811120 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-23 20:37:49.811125 | orchestrator | 2026-02-23 20:37:49.811130 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-23 20:37:49.811135 | orchestrator | Monday 23 February 2026 20:32:26 +0000 (0:00:03.435) 0:00:36.757 ******* 2026-02-23 20:37:49.811141 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-23 20:37:49.811146 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-23 20:37:49.811151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-23 20:37:49.811156 | orchestrator | 2026-02-23 20:37:49.811174 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-23 20:37:49.811180 | orchestrator | Monday 23 February 2026 20:32:31 +0000 (0:00:05.128) 0:00:41.886 ******* 2026-02-23 20:37:49.811185 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.811190 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.811195 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.811200 | orchestrator | 2026-02-23 20:37:49.811205 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-23 20:37:49.811210 | orchestrator | Monday 23 February 2026 20:32:31 +0000 (0:00:00.710) 0:00:42.596 ******* 2026-02-23 20:37:49.811279 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-23 20:37:49.811287 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-23 20:37:49.811294 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-23 20:37:49.811303 | orchestrator | 2026-02-23 20:37:49.811311 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-23 20:37:49.811318 | orchestrator | Monday 23 February 2026 20:32:34 +0000 (0:00:02.724) 0:00:45.321 ******* 2026-02-23 20:37:49.811323 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-23 20:37:49.811384 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-23 20:37:49.811390 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-23 20:37:49.811396 | orchestrator | 2026-02-23 20:37:49.811402 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-23 20:37:49.811408 | orchestrator | Monday 23 February 2026 20:32:38 +0000 (0:00:03.757) 0:00:49.079 ******* 2026-02-23 20:37:49.811414 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-23 20:37:49.811420 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-23 20:37:49.811431 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-23 20:37:49.811437 | orchestrator | 2026-02-23 20:37:49.811442 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-23 20:37:49.811448 | orchestrator | Monday 23 February 2026 20:32:40 +0000 (0:00:01.965) 0:00:51.044 ******* 2026-02-23 20:37:49.811453 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-23 20:37:49.811459 | orchestrator | changed: 
[testbed-node-1] => (item=haproxy-internal.pem) 2026-02-23 20:37:49.811464 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-23 20:37:49.811470 | orchestrator | 2026-02-23 20:37:49.811475 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-23 20:37:49.811797 | orchestrator | Monday 23 February 2026 20:32:42 +0000 (0:00:01.741) 0:00:52.785 ******* 2026-02-23 20:37:49.811821 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.811828 | orchestrator | 2026-02-23 20:37:49.811836 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-23 20:37:49.811850 | orchestrator | Monday 23 February 2026 20:32:42 +0000 (0:00:00.668) 0:00:53.453 ******* 2026-02-23 20:37:49.811859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.811949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.811958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-02-23 20:37:49.811966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.811973 | orchestrator | 2026-02-23 20:37:49.811979 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-23 20:37:49.811983 | orchestrator | Monday 23 February 2026 20:32:46 +0000 (0:00:03.313) 0:00:56.767 ******* 2026-02-23 20:37:49.812005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812026 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.812032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812050 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.812056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812088 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.812093 | orchestrator | 2026-02-23 20:37:49.812099 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-23 20:37:49.812104 | orchestrator | Monday 23 February 2026 20:32:46 +0000 (0:00:00.546) 0:00:57.314 ******* 2026-02-23 20:37:49.812109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812183 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.812188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812494 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.812499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812518 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.812523 | orchestrator | 2026-02-23 20:37:49.812529 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-23 20:37:49.812534 | orchestrator | Monday 23 February 2026 20:32:47 +0000 (0:00:00.868) 0:00:58.183 ******* 2026-02-23 20:37:49.812539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812559 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812579 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.812587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812594 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812609 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.812620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812670 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.812678 | orchestrator | 2026-02-23 20:37:49.812685 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-23 20:37:49.812720 | orchestrator | Monday 23 February 2026 20:32:48 +0000 (0:00:00.672) 0:00:58.855 ******* 2026-02-23 20:37:49.812726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812742 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.812750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812773 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.812835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812852 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.812857 | orchestrator | 2026-02-23 20:37:49.812862 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-23 20:37:49.812867 | orchestrator | Monday 23 February 2026 20:32:48 +0000 (0:00:00.861) 0:00:59.716 ******* 2026-02-23 20:37:49.812872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.812886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.812895 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.812912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.812918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.813193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.813207 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.813215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.813240 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.813247 | orchestrator | 2026-02-23 20:37:49.813254 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-23 20:37:49.813262 | orchestrator | Monday 23 February 2026 20:32:49 +0000 (0:00:00.775) 0:01:00.491 ******* 2026-02-23 20:37:49.813270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.813351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.813372 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.813380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.813389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.813415 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.813422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.813450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.813466 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.813473 | orchestrator | 2026-02-23 20:37:49.813482 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-23 20:37:49.813489 | orchestrator | Monday 23 February 2026 20:32:51 +0000 (0:00:01.655) 0:01:02.147 ******* 2026-02-23 
20:37:49.813497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.813504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.813522 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.813527 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.813532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.813558 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.813563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.813569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.813582 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.813587 | orchestrator | 2026-02-23 20:37:49.813592 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend 
internal TLS key] **** 2026-02-23 20:37:49.813597 | orchestrator | Monday 23 February 2026 20:32:52 +0000 (0:00:01.493) 0:01:03.640 ******* 2026-02-23 20:37:49.813605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.813611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-02-23 20:37:49.813621 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.813638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.813644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.813658 | 
orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.813663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-23 20:37:49.813672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-23 20:37:49.813677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-23 20:37:49.813682 | orchestrator | skipping: [testbed-node-2] 
2026-02-23 20:37:49.813687 | orchestrator | 2026-02-23 20:37:49.813692 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-23 20:37:49.813697 | orchestrator | Monday 23 February 2026 20:32:53 +0000 (0:00:00.814) 0:01:04.455 ******* 2026-02-23 20:37:49.813701 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-23 20:37:49.813707 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-23 20:37:49.814072 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-23 20:37:49.814089 | orchestrator | 2026-02-23 20:37:49.814094 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-23 20:37:49.814099 | orchestrator | Monday 23 February 2026 20:32:55 +0000 (0:00:01.452) 0:01:05.907 ******* 2026-02-23 20:37:49.814104 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-23 20:37:49.814109 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-23 20:37:49.814114 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-23 20:37:49.814119 | orchestrator | 2026-02-23 20:37:49.814124 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-23 20:37:49.814129 | orchestrator | Monday 23 February 2026 20:32:56 +0000 (0:00:01.295) 0:01:07.202 ******* 2026-02-23 20:37:49.814133 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-23 20:37:49.814138 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2026-02-23 20:37:49.814143 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-23 20:37:49.814155 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-23 20:37:49.814160 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.814165 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-23 20:37:49.814170 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.814175 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-23 20:37:49.814180 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.814185 | orchestrator | 2026-02-23 20:37:49.814190 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-23 20:37:49.814195 | orchestrator | Monday 23 February 2026 20:32:57 +0000 (0:00:00.700) 0:01:07.902 ******* 2026-02-23 20:37:49.814200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.814209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.814214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-23 20:37:49.814258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.814265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.814280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-23 20:37:49.814289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.814296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.814305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-23 20:37:49.814310 | orchestrator | 2026-02-23 20:37:49.814315 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-23 20:37:49.814320 | orchestrator | Monday 23 February 2026 20:32:59 +0000 (0:00:02.162) 0:01:10.065 ******* 2026-02-23 20:37:49.814343 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.814349 | orchestrator | 2026-02-23 20:37:49.814354 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-23 20:37:49.814359 | orchestrator | Monday 23 February 2026 20:32:59 +0000 (0:00:00.527) 0:01:10.592 ******* 2026-02-23 20:37:49.814365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-23 20:37:49.814422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.814441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-23 20:37:49.814449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.814469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-23 20:37:49.814529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.814537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814556 | orchestrator | 2026-02-23 20:37:49.814564 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-23 20:37:49.814572 | orchestrator | Monday 23 February 2026 20:33:03 +0000 (0:00:03.871) 0:01:14.463 ******* 2026-02-23 20:37:49.814580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-23 20:37:49.814608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-23 20:37:49.814622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.814627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.814641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814651 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.814657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814666 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.814685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-23 20:37:49.814691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.814964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.814998 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.815006 | orchestrator | 2026-02-23 20:37:49.815014 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-23 20:37:49.815023 | orchestrator | Monday 23 February 2026 20:33:04 +0000 (0:00:00.894) 0:01:15.357 ******* 2026-02-23 20:37:49.815032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-23 20:37:49.815043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-23 20:37:49.815051 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.815059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-23 20:37:49.815077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-23 20:37:49.815086 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.815094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-23 20:37:49.815101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-23 20:37:49.815110 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.815115 | orchestrator | 2026-02-23 20:37:49.815181 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-23 20:37:49.815188 | orchestrator | Monday 23 February 2026 20:33:05 +0000 (0:00:01.006) 0:01:16.363 ******* 2026-02-23 20:37:49.815193 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.815200 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.815208 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.815219 | orchestrator | 2026-02-23 20:37:49.815229 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-23 20:37:49.815237 | orchestrator | Monday 23 February 2026 20:33:06 +0000 (0:00:01.243) 0:01:17.607 ******* 2026-02-23 20:37:49.815245 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.815254 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.815261 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.815269 | orchestrator | 2026-02-23 20:37:49.815278 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-23 20:37:49.815286 | orchestrator | Monday 23 February 2026 20:33:08 +0000 (0:00:01.951) 0:01:19.559 ******* 2026-02-23 20:37:49.815374 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.815382 | orchestrator | 2026-02-23 20:37:49.815387 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-23 20:37:49.815392 | orchestrator | Monday 23 February 2026 20:33:09 +0000 (0:00:00.705) 0:01:20.264 ******* 2026-02-23 20:37:49.815398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.815410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.815634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.815663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-02-23 20:37:49.815675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815680 | orchestrator | 2026-02-23 20:37:49.815686 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-23 20:37:49.815691 | orchestrator | Monday 23 February 2026 20:33:13 +0000 (0:00:03.510) 0:01:23.775 ******* 2026-02-23 20:37:49.815736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.815744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815758 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.815776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.815792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815838 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.815910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.815922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.815939 | orchestrator | skipping: [testbed-node-2] 2026-02-23 
20:37:49.815957 | orchestrator | 2026-02-23 20:37:49.815965 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-23 20:37:49.815974 | orchestrator | Monday 23 February 2026 20:33:13 +0000 (0:00:00.568) 0:01:24.344 ******* 2026-02-23 20:37:49.815983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-23 20:37:49.815997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-23 20:37:49.816004 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.816009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-23 20:37:49.816014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-23 20:37:49.816041 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.816046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-23 20:37:49.816051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-23 20:37:49.816056 | orchestrator | 
skipping: [testbed-node-2] 2026-02-23 20:37:49.816061 | orchestrator | 2026-02-23 20:37:49.816066 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-23 20:37:49.816071 | orchestrator | Monday 23 February 2026 20:33:14 +0000 (0:00:01.046) 0:01:25.391 ******* 2026-02-23 20:37:49.816076 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.816081 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.816085 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.816131 | orchestrator | 2026-02-23 20:37:49.816138 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-23 20:37:49.816143 | orchestrator | Monday 23 February 2026 20:33:16 +0000 (0:00:01.447) 0:01:26.838 ******* 2026-02-23 20:37:49.816148 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.816222 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.816235 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.816243 | orchestrator | 2026-02-23 20:37:49.816310 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-23 20:37:49.816321 | orchestrator | Monday 23 February 2026 20:33:18 +0000 (0:00:02.152) 0:01:28.990 ******* 2026-02-23 20:37:49.816355 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.816363 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.816370 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.816377 | orchestrator | 2026-02-23 20:37:49.816385 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-23 20:37:49.816393 | orchestrator | Monday 23 February 2026 20:33:18 +0000 (0:00:00.258) 0:01:29.249 ******* 2026-02-23 20:37:49.816401 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.816408 | orchestrator | 2026-02-23 20:37:49.816416 | 
orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-23 20:37:49.816424 | orchestrator | Monday 23 February 2026 20:33:19 +0000 (0:00:00.873) 0:01:30.123 ******* 2026-02-23 20:37:49.816434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-23 20:37:49.816453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-23 20:37:49.816469 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-23 20:37:49.816477 | orchestrator | 2026-02-23 20:37:49.816953 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-23 20:37:49.816969 | orchestrator | Monday 23 February 2026 20:33:22 +0000 (0:00:02.800) 0:01:32.923 ******* 2026-02-23 20:37:49.817027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-23 
20:37:49.817036 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.817041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-23 20:37:49.817055 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.817060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-23 20:37:49.817066 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.817070 | orchestrator | 2026-02-23 
20:37:49.817075 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-23 20:37:49.817080 | orchestrator | Monday 23 February 2026 20:33:23 +0000 (0:00:01.514) 0:01:34.438 ******* 2026-02-23 20:37:49.817086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-23 20:37:49.817120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-23 20:37:49.817127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-23 20:37:49.817134 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.817139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-23 20:37:49.817145 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.817185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-23 20:37:49.817430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-23 20:37:49.817452 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.817457 | orchestrator | 2026-02-23 20:37:49.817462 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-23 20:37:49.817467 | orchestrator | Monday 23 February 2026 20:33:25 +0000 (0:00:01.606) 0:01:36.044 ******* 2026-02-23 20:37:49.817577 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.817583 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.817588 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.817593 | orchestrator | 2026-02-23 20:37:49.817598 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-23 20:37:49.817603 | orchestrator | Monday 23 February 2026 20:33:25 +0000 (0:00:00.551) 0:01:36.595 ******* 2026-02-23 20:37:49.817608 | 
orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.817614 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.817622 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.817631 | orchestrator | 2026-02-23 20:37:49.817636 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-23 20:37:49.817641 | orchestrator | Monday 23 February 2026 20:33:26 +0000 (0:00:01.138) 0:01:37.734 ******* 2026-02-23 20:37:49.817646 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.817682 | orchestrator | 2026-02-23 20:37:49.817688 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-23 20:37:49.817693 | orchestrator | Monday 23 February 2026 20:33:27 +0000 (0:00:00.710) 0:01:38.445 ******* 2026-02-23 20:37:49.817699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.817711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.817822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.817831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817892 | orchestrator | 2026-02-23 
20:37:49.817897 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-23 20:37:49.817902 | orchestrator | Monday 23 February 2026 20:33:31 +0000 (0:00:03.593) 0:01:42.038 ******* 2026-02-23 20:37:49.817911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.817916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.817997 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.818002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.818008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.818246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.818377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.818388 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.818443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.818451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 
20:37:49.818457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.818470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.818506 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.818512 | orchestrator | 2026-02-23 20:37:49.818524 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-23 20:37:49.818529 | orchestrator | Monday 23 February 2026 20:33:32 +0000 (0:00:00.803) 0:01:42.842 ******* 2026-02-23 20:37:49.818535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-23 20:37:49.818542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-23 20:37:49.818547 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.818552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-23 20:37:49.818557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-23 20:37:49.818562 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.818567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-23 20:37:49.818993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-23 20:37:49.819010 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.819042 | orchestrator | 2026-02-23 20:37:49.819048 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-23 20:37:49.819053 | orchestrator | Monday 23 February 2026 20:33:32 +0000 (0:00:00.824) 0:01:43.666 ******* 2026-02-23 20:37:49.819392 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.819400 | orchestrator | changed: [testbed-node-1] 2026-02-23 
20:37:49.819406 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.819411 | orchestrator | 2026-02-23 20:37:49.819419 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-23 20:37:49.819428 | orchestrator | Monday 23 February 2026 20:33:34 +0000 (0:00:01.262) 0:01:44.929 ******* 2026-02-23 20:37:49.819436 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.819444 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.819451 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.819459 | orchestrator | 2026-02-23 20:37:49.819467 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-23 20:37:49.819475 | orchestrator | Monday 23 February 2026 20:33:36 +0000 (0:00:02.046) 0:01:46.975 ******* 2026-02-23 20:37:49.819483 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.819490 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.819498 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.819506 | orchestrator | 2026-02-23 20:37:49.819514 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-23 20:37:49.819521 | orchestrator | Monday 23 February 2026 20:33:36 +0000 (0:00:00.418) 0:01:47.394 ******* 2026-02-23 20:37:49.819529 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.819537 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.819544 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.819551 | orchestrator | 2026-02-23 20:37:49.819559 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-23 20:37:49.819568 | orchestrator | Monday 23 February 2026 20:33:36 +0000 (0:00:00.274) 0:01:47.668 ******* 2026-02-23 20:37:49.819575 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.819601 | orchestrator | 
2026-02-23 20:37:49.819706 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-23 20:37:49.819713 | orchestrator | Monday 23 February 2026 20:33:37 +0000 (0:00:00.804) 0:01:48.472 ******* 2026-02-23 20:37:49.819725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:37:49.819732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:37:49.819739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.820067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.820098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.820108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.820129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.820777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:37:49.820808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:37:49.820888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.820897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.820903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.820920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.820933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.820942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:37:49.820950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:37:49.821011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821023 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 
'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821067 | orchestrator | 2026-02-23 20:37:49.821076 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-23 20:37:49.821085 | orchestrator | Monday 23 February 2026 20:33:44 +0000 (0:00:06.441) 0:01:54.914 ******* 2026-02-23 20:37:49.821094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:37:49.821161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:37:49.821173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:37:49.821190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:37:49.821203 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821371 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.821380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821395 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.821463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:37:49.821482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:37:49.821491 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.821626 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.821634 | orchestrator | 2026-02-23 20:37:49.821643 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-23 20:37:49.821651 | orchestrator | Monday 23 February 2026 20:33:45 +0000 (0:00:01.134) 0:01:56.048 ******* 2026-02-23 20:37:49.821657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-23 20:37:49.821663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-23 20:37:49.821670 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.821675 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-23 20:37:49.821680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-23 20:37:49.821685 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.821690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-23 20:37:49.821695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-23 20:37:49.821701 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.821706 | orchestrator | 2026-02-23 20:37:49.821711 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-23 20:37:49.821716 | orchestrator | Monday 23 February 2026 20:33:46 +0000 (0:00:01.289) 0:01:57.338 ******* 2026-02-23 20:37:49.821722 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.821727 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.821732 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.821737 | orchestrator | 2026-02-23 20:37:49.821742 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-23 20:37:49.821747 | orchestrator | Monday 23 February 2026 20:33:48 +0000 (0:00:01.651) 0:01:58.990 ******* 2026-02-23 20:37:49.821752 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.821757 | orchestrator | changed: [testbed-node-1] 2026-02-23 
20:37:49.821763 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.821768 | orchestrator | 2026-02-23 20:37:49.821810 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-23 20:37:49.821816 | orchestrator | Monday 23 February 2026 20:33:50 +0000 (0:00:01.940) 0:02:00.930 ******* 2026-02-23 20:37:49.821821 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.821826 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.821835 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.821840 | orchestrator | 2026-02-23 20:37:49.821845 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-23 20:37:49.821850 | orchestrator | Monday 23 February 2026 20:33:50 +0000 (0:00:00.463) 0:02:01.394 ******* 2026-02-23 20:37:49.821855 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.821861 | orchestrator | 2026-02-23 20:37:49.821866 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-23 20:37:49.821871 | orchestrator | Monday 23 February 2026 20:33:51 +0000 (0:00:00.743) 0:02:02.137 ******* 2026-02-23 20:37:49.821919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:37:49.821933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-23 20:37:49.821943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:37:49.821994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-23 20:37:49.822010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:37:49.822118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-23 20:37:49.822132 | orchestrator | 2026-02-23 20:37:49.822141 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-23 20:37:49.822149 | orchestrator | Monday 23 February 2026 20:33:56 +0000 (0:00:05.233) 0:02:07.371 ******* 2026-02-23 20:37:49.822162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-23 20:37:49.822245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-23 20:37:49.822256 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.822266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-23 20:37:49.822324 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-23 20:37:49.822416 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.822427 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-23 20:37:49.822484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-23 20:37:49.822498 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.822503 | orchestrator | 2026-02-23 20:37:49.822509 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-23 20:37:49.822514 | orchestrator | Monday 23 February 2026 20:33:59 +0000 (0:00:03.006) 0:02:10.377 ******* 2026-02-23 
20:37:49.822520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-23 20:37:49.822527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-23 20:37:49.822532 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.822537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-23 20:37:49.822543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-23 20:37:49.822548 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.822560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-23 20:37:49.822565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-23 20:37:49.822570 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.822576 | orchestrator | 2026-02-23 20:37:49.822581 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-23 20:37:49.822586 | orchestrator | Monday 23 February 2026 20:34:03 +0000 (0:00:03.934) 0:02:14.312 ******* 2026-02-23 20:37:49.822591 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.822596 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.822612 | orchestrator | changed: 
[testbed-node-2] 2026-02-23 20:37:49.822617 | orchestrator | 2026-02-23 20:37:49.822622 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-23 20:37:49.822627 | orchestrator | Monday 23 February 2026 20:34:04 +0000 (0:00:01.253) 0:02:15.566 ******* 2026-02-23 20:37:49.822632 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.822637 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.822642 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.822647 | orchestrator | 2026-02-23 20:37:49.822652 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-23 20:37:49.822693 | orchestrator | Monday 23 February 2026 20:34:06 +0000 (0:00:01.887) 0:02:17.453 ******* 2026-02-23 20:37:49.822700 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.822705 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.822711 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.822716 | orchestrator | 2026-02-23 20:37:49.822721 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-23 20:37:49.822726 | orchestrator | Monday 23 February 2026 20:34:07 +0000 (0:00:00.344) 0:02:17.798 ******* 2026-02-23 20:37:49.822731 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.822739 | orchestrator | 2026-02-23 20:37:49.822748 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-23 20:37:49.822755 | orchestrator | Monday 23 February 2026 20:34:07 +0000 (0:00:00.748) 0:02:18.547 ******* 2026-02-23 20:37:49.822766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:37:49.822777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:37:49.822799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:37:49.822807 | orchestrator | 2026-02-23 20:37:49.822815 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 
2026-02-23 20:37:49.822823 | orchestrator | Monday 23 February 2026 20:34:10 +0000 (0:00:03.176) 0:02:21.723 ******* 2026-02-23 20:37:49.822831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-23 20:37:49.822839 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.822897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-23 20:37:49.822908 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.822917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-23 20:37:49.822925 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.822933 | orchestrator | 2026-02-23 20:37:49.822941 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-23 20:37:49.822949 | orchestrator | Monday 23 February 2026 20:34:11 +0000 (0:00:00.502) 0:02:22.225 ******* 2026-02-23 20:37:49.822959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-23 20:37:49.822988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-23 20:37:49.822998 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.823006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-23 20:37:49.823019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-23 20:37:49.823030 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.823038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}})  2026-02-23 20:37:49.823046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-23 20:37:49.823053 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.823061 | orchestrator | 2026-02-23 20:37:49.823069 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-23 20:37:49.823077 | orchestrator | Monday 23 February 2026 20:34:12 +0000 (0:00:00.587) 0:02:22.813 ******* 2026-02-23 20:37:49.823084 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.823097 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.823106 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.823114 | orchestrator | 2026-02-23 20:37:49.823122 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-23 20:37:49.823130 | orchestrator | Monday 23 February 2026 20:34:13 +0000 (0:00:01.276) 0:02:24.090 ******* 2026-02-23 20:37:49.823139 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.823147 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.823155 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.823164 | orchestrator | 2026-02-23 20:37:49.823169 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-23 20:37:49.823174 | orchestrator | Monday 23 February 2026 20:34:15 +0000 (0:00:02.086) 0:02:26.176 ******* 2026-02-23 20:37:49.823179 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.823184 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.823189 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.823194 | orchestrator | 2026-02-23 20:37:49.823200 | orchestrator | TASK [include_role : horizon] 
************************************************** 2026-02-23 20:37:49.823205 | orchestrator | Monday 23 February 2026 20:34:15 +0000 (0:00:00.472) 0:02:26.648 ******* 2026-02-23 20:37:49.823210 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.823215 | orchestrator | 2026-02-23 20:37:49.823220 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-23 20:37:49.823225 | orchestrator | Monday 23 February 2026 20:34:16 +0000 (0:00:00.856) 0:02:27.505 ******* 2026-02-23 20:37:49.823286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-23 20:37:49.823307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-23 20:37:49.823382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-23 20:37:49.823397 | orchestrator | 2026-02-23 20:37:49.823402 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-23 20:37:49.823407 | orchestrator | Monday 23 February 2026 20:34:20 +0000 (0:00:03.772) 0:02:31.278 ******* 2026-02-23 20:37:49.823442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-23 20:37:49.823454 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.823460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-23 20:37:49.823466 | orchestrator | skipping: [testbed-node-2] 
2026-02-23 20:37:49.823508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-23 20:37:49.823519 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.823525 | orchestrator | 2026-02-23 20:37:49.823530 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-23 20:37:49.823535 | orchestrator | Monday 23 February 2026 20:34:21 +0000 (0:00:00.941) 0:02:32.220 ******* 2026-02-23 20:37:49.823541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-23 20:37:49.823548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-23 20:37:49.823554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-23 20:37:49.823561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-23 20:37:49.823567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-23 20:37:49.823575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-23 20:37:49.823581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-23 20:37:49.823601 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.823607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-23 20:37:49.823612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-23 20:37:49.823618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-23 20:37:49.823627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-23 20:37:49.823671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-23 20:37:49.823679 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.823685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-23 20:37:49.823690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-23 20:37:49.823696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-23 20:37:49.823701 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.823707 | orchestrator | 2026-02-23 20:37:49.823712 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-23 20:37:49.823718 | 
orchestrator | Monday 23 February 2026 20:34:22 +0000 (0:00:00.988) 0:02:33.208 ******* 2026-02-23 20:37:49.823723 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.823728 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.823734 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.823739 | orchestrator | 2026-02-23 20:37:49.823744 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-23 20:37:49.823750 | orchestrator | Monday 23 February 2026 20:34:23 +0000 (0:00:01.328) 0:02:34.537 ******* 2026-02-23 20:37:49.823755 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.823760 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.823766 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.823771 | orchestrator | 2026-02-23 20:37:49.823777 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-23 20:37:49.823790 | orchestrator | Monday 23 February 2026 20:34:25 +0000 (0:00:01.770) 0:02:36.307 ******* 2026-02-23 20:37:49.823796 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.823801 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.823806 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.823811 | orchestrator | 2026-02-23 20:37:49.823817 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-23 20:37:49.823822 | orchestrator | Monday 23 February 2026 20:34:25 +0000 (0:00:00.271) 0:02:36.578 ******* 2026-02-23 20:37:49.823827 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.823832 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.823837 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.823842 | orchestrator | 2026-02-23 20:37:49.823847 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-23 20:37:49.823852 | 
orchestrator | Monday 23 February 2026 20:34:26 +0000 (0:00:00.433) 0:02:37.012 ******* 2026-02-23 20:37:49.823858 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.823863 | orchestrator | 2026-02-23 20:37:49.823868 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-23 20:37:49.823878 | orchestrator | Monday 23 February 2026 20:34:27 +0000 (0:00:00.860) 0:02:37.872 ******* 2026-02-23 20:37:49.823902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:37:49.823953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:37:49.823962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:37:49.823968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:37:49.823974 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:37:49.823986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:37:49.823993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:37:49.824036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:37:49.824043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:37:49.824049 | orchestrator | 2026-02-23 20:37:49.824054 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-23 20:37:49.824059 | orchestrator | Monday 23 February 2026 20:34:30 +0000 (0:00:03.542) 0:02:41.415 ******* 2026-02-23 20:37:49.824065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:37:49.824077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:37:49.824083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:37:49.824088 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.824128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:37:49.824137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 
20:37:49.824142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:37:49.824151 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.824166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:37:49.824182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:37:49.824191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:37:49.824200 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.824208 | orchestrator | 2026-02-23 20:37:49.824216 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-23 20:37:49.824272 | orchestrator | Monday 23 February 2026 20:34:31 +0000 (0:00:00.548) 0:02:41.964 ******* 2026-02-23 20:37:49.824284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-23 20:37:49.824294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}})  2026-02-23 20:37:49.824302 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.824309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-23 20:37:49.824317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-23 20:37:49.824355 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.824364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-23 20:37:49.824381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-23 20:37:49.824390 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.824398 | orchestrator | 2026-02-23 20:37:49.824406 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-23 20:37:49.824414 | orchestrator | Monday 23 February 2026 20:34:31 +0000 (0:00:00.727) 0:02:42.691 ******* 2026-02-23 20:37:49.824422 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.824430 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.824439 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.824447 | 
orchestrator |
2026-02-23 20:37:49.824455 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-02-23 20:37:49.824464 | orchestrator | Monday 23 February 2026 20:34:32 +0000 (0:00:01.037) 0:02:43.728 *******
2026-02-23 20:37:49.824472 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:37:49.824480 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:37:49.824489 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:37:49.824498 | orchestrator |
2026-02-23 20:37:49.824507 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-02-23 20:37:49.824521 | orchestrator | Monday 23 February 2026 20:34:34 +0000 (0:00:01.831) 0:02:45.559 *******
2026-02-23 20:37:49.824529 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.824538 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.824545 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.824550 | orchestrator |
2026-02-23 20:37:49.824555 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-02-23 20:37:49.824570 | orchestrator | Monday 23 February 2026 20:34:35 +0000 (0:00:00.449) 0:02:46.009 *******
2026-02-23 20:37:49.824576 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:37:49.824581 | orchestrator |
2026-02-23 20:37:49.824586 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-02-23 20:37:49.824591 | orchestrator | Monday 23 February 2026 20:34:36 +0000 (0:00:00.921) 0:02:46.930 *******
2026-02-23 20:37:49.824597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-23 20:37:49.824661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.824676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-23 20:37:49.824682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.824691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-23 20:37:49.824697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.824702 | orchestrator |
2026-02-23 20:37:49.824707 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-02-23 20:37:49.824713 | orchestrator | Monday 23 February 2026 20:34:39 +0000 (0:00:03.113) 0:02:50.043 *******
2026-02-23 20:37:49.824756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-23 20:37:49.824772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.824777 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.824783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-23 20:37:49.824791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.824796 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.824837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-23 20:37:49.824844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.824854 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.824859 | orchestrator |
2026-02-23 20:37:49.824864 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-02-23 20:37:49.824869 | orchestrator | Monday 23 February 2026 20:34:40 +0000 (0:00:00.861) 0:02:50.905 *******
2026-02-23 20:37:49.824875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-23 20:37:49.824881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-23 20:37:49.824887 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.824892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-23 20:37:49.824897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-23 20:37:49.824902 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.824908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-23 20:37:49.824913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-23 20:37:49.824918 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.824923 | orchestrator |
2026-02-23 20:37:49.824928 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-23 20:37:49.824933 | orchestrator | Monday 23 February 2026 20:34:41 +0000 (0:00:01.075) 0:02:51.980 *******
2026-02-23 20:37:49.824942 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:37:49.824947 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:37:49.824952 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:37:49.824957 | orchestrator |
2026-02-23 20:37:49.824962 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-23 20:37:49.824968 | orchestrator | Monday 23 February 2026 20:34:42 +0000 (0:00:01.364) 0:02:53.344 *******
2026-02-23 20:37:49.824973 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:37:49.824978 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:37:49.824983 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:37:49.824988 | orchestrator |
2026-02-23 20:37:49.824994 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-23 20:37:49.824999 | orchestrator | Monday 23 February 2026 20:34:44 +0000 (0:00:01.909) 0:02:55.253 *******
2026-02-23 20:37:49.825004 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:37:49.825009 | orchestrator |
2026-02-23 20:37:49.825014 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-23 20:37:49.825029 | orchestrator | Monday 23 February 2026 20:34:45 +0000 (0:00:01.152) 0:02:56.406 *******
2026-02-23 20:37:49.825074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-23 20:37:49.825090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-23 20:37:49.825125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-23 20:37:49.825213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825250 | orchestrator |
2026-02-23 20:37:49.825262 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-02-23 20:37:49.825273 | orchestrator | Monday 23 February 2026 20:34:48 +0000 (0:00:03.317) 0:02:59.723 *******
2026-02-23 20:37:49.825408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-23 20:37:49.825426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-23 20:37:49.825486 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.825495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825586 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.825596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-23 20:37:49.825605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.825658 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.825664 | orchestrator |
2026-02-23 20:37:49.825669 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-02-23 20:37:49.825674 | orchestrator | Monday 23 February 2026 20:34:49 +0000 (0:00:00.758) 0:03:00.481 *******
2026-02-23 20:37:49.825679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-23 20:37:49.825685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-23 20:37:49.825691 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.825696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-23 20:37:49.825747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-23 20:37:49.825755 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.825761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-23 20:37:49.825766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-23 20:37:49.825771 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.825777 | orchestrator |
2026-02-23 20:37:49.825782 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-02-23 20:37:49.825787 | orchestrator | Monday 23 February 2026 20:34:50 +0000 (0:00:01.264) 0:03:01.746 *******
2026-02-23 20:37:49.825792 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:37:49.825797 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:37:49.825802 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:37:49.825807 | orchestrator |
2026-02-23 20:37:49.825813 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-02-23 20:37:49.825818 | orchestrator | Monday 23 February 2026 20:34:52 +0000 (0:00:01.309) 0:03:03.055 *******
2026-02-23 20:37:49.825823 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:37:49.825828 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:37:49.825833 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:37:49.825838 | orchestrator |
2026-02-23 20:37:49.825843 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-02-23 20:37:49.825848 | orchestrator | Monday 23 February 2026 20:34:54 +0000 (0:00:01.330) 0:03:04.901 *******
2026-02-23 20:37:49.825853 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:37:49.825858 | orchestrator |
2026-02-23 20:37:49.825864 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-02-23 20:37:49.825869 | orchestrator | Monday 23 February 2026 20:34:55 +0000 (0:00:02.858) 0:03:06.231 *******
2026-02-23 20:37:49.825878 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-23 20:37:49.825884 | orchestrator |
2026-02-23 20:37:49.825889 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-02-23 20:37:49.825894 | orchestrator | Monday 23 February 2026 20:34:58 +0000 (0:00:02.858) 0:03:09.090 *******
2026-02-23 20:37:49.825903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-23 20:37:49.825947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-23 20:37:49.825955 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.825960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-23 20:37:49.825973 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-23 20:37:49.825978 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.826063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:37:49.826073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-23 20:37:49.826078 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.826083 | orchestrator | 2026-02-23 20:37:49.826088 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-23 20:37:49.826098 | orchestrator | Monday 23 February 2026 20:35:00 +0000 (0:00:01.860) 0:03:10.950 ******* 2026-02-23 20:37:49.826106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:37:49.826150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': 
'30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:37:49.826157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-23 20:37:49.826167 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.826172 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-23 20:37:49.826177 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.826185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:37:49.826226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-23 20:37:49.826233 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.826238 | orchestrator | 2026-02-23 20:37:49.826243 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-23 20:37:49.826248 | orchestrator | Monday 23 February 2026 20:35:02 +0000 (0:00:02.007) 0:03:12.958 ******* 2026-02-23 20:37:49.826253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-23 20:37:49.826262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-23 20:37:49.826267 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.826272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-23 20:37:49.826280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-23 20:37:49.826285 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.826290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-23 20:37:49.826355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-23 20:37:49.826364 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.826369 | orchestrator | 2026-02-23 20:37:49.826374 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-23 20:37:49.826379 | orchestrator | Monday 23 February 2026 20:35:04 +0000 (0:00:02.700) 0:03:15.659 ******* 2026-02-23 20:37:49.826384 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.826389 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.826399 | orchestrator | changed: [testbed-node-2] 2026-02-23 
20:37:49.826404 | orchestrator | 2026-02-23 20:37:49.826409 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-23 20:37:49.826414 | orchestrator | Monday 23 February 2026 20:35:06 +0000 (0:00:01.844) 0:03:17.504 ******* 2026-02-23 20:37:49.826419 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.826423 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.826428 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.826433 | orchestrator | 2026-02-23 20:37:49.826438 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-23 20:37:49.826443 | orchestrator | Monday 23 February 2026 20:35:08 +0000 (0:00:01.438) 0:03:18.942 ******* 2026-02-23 20:37:49.826448 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.826453 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.826457 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.826462 | orchestrator | 2026-02-23 20:37:49.826467 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-23 20:37:49.826472 | orchestrator | Monday 23 February 2026 20:35:08 +0000 (0:00:00.333) 0:03:19.276 ******* 2026-02-23 20:37:49.826477 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.826482 | orchestrator | 2026-02-23 20:37:49.826487 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-23 20:37:49.826491 | orchestrator | Monday 23 February 2026 20:35:09 +0000 (0:00:01.343) 0:03:20.620 ******* 2026-02-23 20:37:49.826497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-23 20:37:49.826515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-23 20:37:49.826521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}}}}) 2026-02-23 20:37:49.826526 | orchestrator | 2026-02-23 20:37:49.826531 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-23 20:37:49.826539 | orchestrator | Monday 23 February 2026 20:35:11 +0000 (0:00:01.413) 0:03:22.034 ******* 2026-02-23 20:37:49.826581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-23 20:37:49.826588 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.826593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-23 20:37:49.826598 
| orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.826603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-23 20:37:49.826608 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.826613 | orchestrator | 2026-02-23 20:37:49.826618 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-23 20:37:49.826623 | orchestrator | Monday 23 February 2026 20:35:11 +0000 (0:00:00.415) 0:03:22.450 ******* 2026-02-23 20:37:49.826628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-23 20:37:49.826638 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.826646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-23 20:37:49.826654 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.826662 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-23 20:37:49.826670 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.826677 | orchestrator | 2026-02-23 20:37:49.826685 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-23 20:37:49.826703 | orchestrator | Monday 23 February 2026 20:35:12 +0000 (0:00:00.889) 0:03:23.340 ******* 2026-02-23 20:37:49.826710 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.826718 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.826726 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.826734 | orchestrator | 2026-02-23 20:37:49.826741 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-23 20:37:49.826748 | orchestrator | Monday 23 February 2026 20:35:13 +0000 (0:00:00.452) 0:03:23.793 ******* 2026-02-23 20:37:49.826755 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.826763 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.826771 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.826778 | orchestrator | 2026-02-23 20:37:49.826786 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-23 20:37:49.826794 | orchestrator | Monday 23 February 2026 20:35:14 +0000 (0:00:01.365) 0:03:25.158 ******* 2026-02-23 20:37:49.826802 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.826810 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.826818 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.826825 | orchestrator | 2026-02-23 20:37:49.826833 | orchestrator | TASK [include_role : neutron] 
**************************************************
2026-02-23 20:37:49.826908 | orchestrator | Monday 23 February 2026 20:35:14 +0000 (0:00:00.331) 0:03:25.489 *******
2026-02-23 20:37:49.826916 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:37:49.826921 | orchestrator |
2026-02-23 20:37:49.826926 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-02-23 20:37:49.826931 | orchestrator | Monday 23 February 2026 20:35:16 +0000 (0:00:01.447) 0:03:26.936 *******
2026-02-23 20:37:49.826936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-23 20:37:49.826943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.826953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.826965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-23 20:37:49.827011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-23 20:37:49.827022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-23 20:37:49.827108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-23 20:37:49.827113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-23 20:37:49.827189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-23 20:37:49.827234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-23 20:37:49.827257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-23 20:37:49.827299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-23 20:37:49.827354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-23 20:37:49.827510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-23 20:37:49.827595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-23 20:37:49.827616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-23 20:37:49.827621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-23 20:37:49.827661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-23 20:37:49.827669 | orchestrator |
2026-02-23 20:37:49.827674 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-02-23 20:37:49.827680 | orchestrator | Monday 23 February 2026 20:35:20 +0000 (0:00:04.083) 0:03:31.020 *******
2026-02-23 20:37:49.827685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-23 20:37:49.827696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-23 20:37:49.827704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.827775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-23 20:37:49.827781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image':
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.827786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.827823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-23 20:37:49.827830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-23 20:37:49.827839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.827845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-23 20:37:49.827853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-23 20:37:49.827859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.827864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-23 20:37:49.827911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:37:49.827920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.827932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.827941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:37:49.827958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-23 20:37:49.827967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.828036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-23 20:37:49.828049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-23 20:37:49.828064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.828072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:37:49.828085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-23 20:37:49.828093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2026-02-23 20:37:49.828142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.828157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.828162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:37:49.828167 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.828173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.828190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2026-02-23 20:37:49.828196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.828242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:37:49.828249 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.828254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-23 20:37:49.828259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.828265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-23 20:37:49.828273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2026-02-23 20:37:49.828278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.828300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:37:49.828310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.828315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-23 20:37:49.828320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-23 20:37:49.828497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2026-02-23 20:37:49.828540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-23 20:37:49.828585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:37:49.828597 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.828603 | orchestrator | 2026-02-23 20:37:49.828609 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-23 20:37:49.828614 | orchestrator | Monday 23 February 2026 
20:35:21 +0000 (0:00:01.626) 0:03:32.646 ******* 2026-02-23 20:37:49.828619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-23 20:37:49.828625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-23 20:37:49.828630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-23 20:37:49.828636 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.828641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-23 20:37:49.828645 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.828650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-23 20:37:49.828655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-23 20:37:49.828660 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.828665 | orchestrator | 2026-02-23 20:37:49.828670 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-23 20:37:49.828674 | orchestrator | Monday 23 February 2026 20:35:23 +0000 (0:00:01.961) 0:03:34.607 ******* 2026-02-23 20:37:49.828679 | orchestrator 
| changed: [testbed-node-0] 2026-02-23 20:37:49.828684 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.828689 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.828694 | orchestrator | 2026-02-23 20:37:49.828699 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-23 20:37:49.828704 | orchestrator | Monday 23 February 2026 20:35:25 +0000 (0:00:01.360) 0:03:35.968 ******* 2026-02-23 20:37:49.828708 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.828713 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.828718 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.828722 | orchestrator | 2026-02-23 20:37:49.828727 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-23 20:37:49.828732 | orchestrator | Monday 23 February 2026 20:35:27 +0000 (0:00:02.046) 0:03:38.015 ******* 2026-02-23 20:37:49.828737 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.828742 | orchestrator | 2026-02-23 20:37:49.828750 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-23 20:37:49.828755 | orchestrator | Monday 23 February 2026 20:35:28 +0000 (0:00:01.115) 0:03:39.131 ******* 2026-02-23 20:37:49.828760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.828785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.828792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.828797 | orchestrator | 2026-02-23 20:37:49.828802 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-23 20:37:49.828807 | orchestrator | Monday 23 February 2026 20:35:31 +0000 (0:00:03.222) 0:03:42.353 ******* 2026-02-23 20:37:49.828812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.828817 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.828826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.828834 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.828853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.828859 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.828864 | orchestrator | 2026-02-23 20:37:49.828869 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-23 20:37:49.828874 | orchestrator | Monday 23 February 2026 20:35:32 +0000 (0:00:00.457) 0:03:42.811 ******* 2026-02-23 20:37:49.828879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  
2026-02-23 20:37:49.828885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-23 20:37:49.828891 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.828897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-23 20:37:49.828902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-23 20:37:49.828908 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.828913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-23 20:37:49.828918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-23 20:37:49.828923 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.828928 | orchestrator | 2026-02-23 20:37:49.828933 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-23 20:37:49.828939 | orchestrator | Monday 23 February 2026 20:35:32 +0000 (0:00:00.667) 0:03:43.478 ******* 2026-02-23 20:37:49.828944 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.828953 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.828958 | orchestrator | changed: 
[testbed-node-2] 2026-02-23 20:37:49.828963 | orchestrator | 2026-02-23 20:37:49.828968 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-23 20:37:49.828973 | orchestrator | Monday 23 February 2026 20:35:34 +0000 (0:00:01.807) 0:03:45.286 ******* 2026-02-23 20:37:49.828978 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.828983 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.828988 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.828993 | orchestrator | 2026-02-23 20:37:49.828998 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-23 20:37:49.829003 | orchestrator | Monday 23 February 2026 20:35:36 +0000 (0:00:01.755) 0:03:47.042 ******* 2026-02-23 20:37:49.829010 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.829016 | orchestrator | 2026-02-23 20:37:49.829021 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-23 20:37:49.829026 | orchestrator | Monday 23 February 2026 20:35:37 +0000 (0:00:01.333) 0:03:48.375 ******* 2026-02-23 20:37:49.829032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.829054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.829078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.829104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829126 | orchestrator | 2026-02-23 20:37:49.829131 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-23 20:37:49.829137 | orchestrator | Monday 23 February 2026 20:35:41 +0000 (0:00:03.761) 0:03:52.136 ******* 2026-02-23 20:37:49.829145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.829151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829176 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.829181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.829191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829204 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.829209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.829229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.829243 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.829249 | orchestrator | 2026-02-23 20:37:49.829254 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-23 20:37:49.829259 | orchestrator | Monday 23 February 2026 20:35:42 +0000 (0:00:00.953) 0:03:53.090 ******* 2026-02-23 20:37:49.829264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-23 
20:37:49.829279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829289 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.829296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829310 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.829315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829324 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-23 20:37:49.829370 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.829374 | orchestrator | 2026-02-23 20:37:49.829395 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-23 20:37:49.829400 | orchestrator | Monday 23 February 2026 20:35:43 +0000 (0:00:00.828) 0:03:53.918 ******* 2026-02-23 20:37:49.829405 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.829414 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.829419 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.829430 | orchestrator | 2026-02-23 20:37:49.829435 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-23 20:37:49.829440 | orchestrator | Monday 23 February 2026 20:35:44 +0000 (0:00:01.257) 0:03:55.176 ******* 2026-02-23 20:37:49.829444 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.829449 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.829453 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.829458 | orchestrator | 2026-02-23 20:37:49.829463 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-23 20:37:49.829468 | orchestrator | Monday 23 February 2026 20:35:46 +0000 (0:00:01.878) 0:03:57.054 ******* 2026-02-23 20:37:49.829472 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.829477 | orchestrator | 2026-02-23 20:37:49.829481 | orchestrator | TASK 
[nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-23 20:37:49.829486 | orchestrator | Monday 23 February 2026 20:35:47 +0000 (0:00:01.393) 0:03:58.448 ******* 2026-02-23 20:37:49.829490 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-23 20:37:49.829495 | orchestrator | 2026-02-23 20:37:49.829500 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-23 20:37:49.829504 | orchestrator | Monday 23 February 2026 20:35:48 +0000 (0:00:00.772) 0:03:59.220 ******* 2026-02-23 20:37:49.829509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-23 20:37:49.829515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-23 20:37:49.829523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-23 20:37:49.829528 | orchestrator | 2026-02-23 20:37:49.829533 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-23 20:37:49.829538 | orchestrator | Monday 23 February 2026 20:35:52 +0000 (0:00:04.312) 0:04:03.532 ******* 2026-02-23 20:37:49.829543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-23 20:37:49.829548 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.829556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-23 20:37:49.829561 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.829580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 
'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-23 20:37:49.829586 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.829590 | orchestrator | 2026-02-23 20:37:49.829595 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-23 20:37:49.829599 | orchestrator | Monday 23 February 2026 20:35:53 +0000 (0:00:00.958) 0:04:04.491 ******* 2026-02-23 20:37:49.829604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-23 20:37:49.829609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-23 20:37:49.829614 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.829618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-23 20:37:49.829623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-23 20:37:49.829628 | orchestrator | skipping: 
[testbed-node-1] 2026-02-23 20:37:49.829633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-23 20:37:49.829638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-23 20:37:49.829642 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.829647 | orchestrator | 2026-02-23 20:37:49.829651 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-23 20:37:49.829656 | orchestrator | Monday 23 February 2026 20:35:55 +0000 (0:00:01.389) 0:04:05.880 ******* 2026-02-23 20:37:49.829660 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.829665 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.829673 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.829678 | orchestrator | 2026-02-23 20:37:49.829682 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-23 20:37:49.829687 | orchestrator | Monday 23 February 2026 20:35:57 +0000 (0:00:02.347) 0:04:08.228 ******* 2026-02-23 20:37:49.829692 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.829700 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.829704 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.829709 | orchestrator | 2026-02-23 20:37:49.829713 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-23 20:37:49.829718 | orchestrator | Monday 23 February 2026 20:36:00 +0000 (0:00:02.704) 0:04:10.933 ******* 2026-02-23 20:37:49.829723 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-23 20:37:49.829727 | orchestrator | 2026-02-23 20:37:49.829732 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-23 20:37:49.829737 | orchestrator | Monday 23 February 2026 20:36:01 +0000 (0:00:01.143) 0:04:12.076 ******* 2026-02-23 20:37:49.829742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-23 20:37:49.829746 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.829765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-23 20:37:49.829770 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.829775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-23 20:37:49.829780 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.829785 | orchestrator | 2026-02-23 20:37:49.829789 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-23 20:37:49.829794 | orchestrator | Monday 23 February 2026 20:36:02 +0000 (0:00:01.173) 0:04:13.250 ******* 2026-02-23 20:37:49.829799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-23 20:37:49.829804 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.829808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-23 20:37:49.829816 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.829824 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-23 20:37:49.829829 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.829833 | orchestrator | 2026-02-23 20:37:49.829838 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-23 20:37:49.829843 | orchestrator | Monday 23 February 2026 20:36:03 +0000 (0:00:01.142) 0:04:14.392 ******* 2026-02-23 20:37:49.829847 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.829852 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.829856 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.829861 | orchestrator | 2026-02-23 20:37:49.829866 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-23 20:37:49.829870 | orchestrator | Monday 23 February 2026 20:36:05 +0000 (0:00:01.490) 0:04:15.882 ******* 2026-02-23 20:37:49.829875 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.829880 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.829884 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.829889 | orchestrator | 2026-02-23 20:37:49.829893 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-23 20:37:49.829898 | orchestrator | Monday 23 February 2026 20:36:07 +0000 (0:00:02.308) 0:04:18.190 ******* 2026-02-23 20:37:49.829903 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.829907 | orchestrator | ok: [testbed-node-2] 2026-02-23 
20:37:49.829912 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.829916 | orchestrator | 2026-02-23 20:37:49.829921 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-23 20:37:49.829925 | orchestrator | Monday 23 February 2026 20:36:10 +0000 (0:00:02.698) 0:04:20.888 ******* 2026-02-23 20:37:49.829930 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-23 20:37:49.829934 | orchestrator | 2026-02-23 20:37:49.829939 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-23 20:37:49.829944 | orchestrator | Monday 23 February 2026 20:36:10 +0000 (0:00:00.755) 0:04:21.644 ******* 2026-02-23 20:37:49.829963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-23 20:37:49.829968 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.829973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}}}})  2026-02-23 20:37:49.829978 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.829983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-23 20:37:49.829991 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.829996 | orchestrator | 2026-02-23 20:37:49.830000 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-23 20:37:49.830005 | orchestrator | Monday 23 February 2026 20:36:12 +0000 (0:00:01.137) 0:04:22.781 ******* 2026-02-23 20:37:49.830010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-23 20:37:49.830041 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.830049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-23 20:37:49.830054 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.830058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-23 20:37:49.830063 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.830068 | orchestrator | 2026-02-23 20:37:49.830072 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-23 20:37:49.830077 | orchestrator | Monday 23 February 2026 20:36:13 +0000 (0:00:01.186) 0:04:23.968 ******* 2026-02-23 20:37:49.830081 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.830086 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.830091 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.830095 | orchestrator | 2026-02-23 20:37:49.830100 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-23 20:37:49.830104 | orchestrator | Monday 23 February 2026 20:36:14 +0000 (0:00:01.343) 0:04:25.311 ******* 2026-02-23 20:37:49.830109 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.830130 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.830136 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.830140 | orchestrator 
| 2026-02-23 20:37:49.830145 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-23 20:37:49.830149 | orchestrator | Monday 23 February 2026 20:36:16 +0000 (0:00:02.138) 0:04:27.450 ******* 2026-02-23 20:37:49.830154 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.830158 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.830163 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.830173 | orchestrator | 2026-02-23 20:37:49.830178 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-23 20:37:49.830182 | orchestrator | Monday 23 February 2026 20:36:19 +0000 (0:00:02.841) 0:04:30.292 ******* 2026-02-23 20:37:49.830187 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.830191 | orchestrator | 2026-02-23 20:37:49.830196 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-23 20:37:49.830201 | orchestrator | Monday 23 February 2026 20:36:21 +0000 (0:00:01.489) 0:04:31.781 ******* 2026-02-23 20:37:49.830206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.830212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:37:49.830220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.830225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.830244 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.830253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.830258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:37:49.830263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.830270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.830275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 
20:37:49.830280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.830302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:37:49.830307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.830312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:37:49.830317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:37:49.830322 | orchestrator | 2026-02-23 20:37:49.830340 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-23 20:37:49.830345 | orchestrator | Monday 23 February 2026 20:36:24 +0000 (0:00:03.119) 0:04:34.900 ******* 2026-02-23 20:37:49.830352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-23 20:37:49.830357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-23 20:37:49.830381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-23 20:37:49.830386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-23 20:37:49.830391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.830396 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.830404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-23 20:37:49.830409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-23 20:37:49.830414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-23 20:37:49.830435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-23 20:37:49.830441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.830446 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.830451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-23 20:37:49.830456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-23 20:37:49.830463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-23 20:37:49.830468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-23 20:37:49.830489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-23 20:37:49.830495 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.830499 | orchestrator |
2026-02-23 20:37:49.830504 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-02-23 20:37:49.830509 | orchestrator | Monday 23 February 2026 20:36:24 +0000 (0:00:00.645) 0:04:35.546 *******
2026-02-23 20:37:49.830513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-23 20:37:49.830518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-23 20:37:49.830523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-23 20:37:49.830529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-23 20:37:49.830534 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.830538 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.830543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-23 20:37:49.830548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-23 20:37:49.830552 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.830557 | orchestrator |
2026-02-23 20:37:49.830562 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-02-23 20:37:49.830566 | orchestrator | Monday 23 February 2026 20:36:26 +0000 (0:00:01.265) 0:04:36.811 *******
2026-02-23 20:37:49.830571 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:37:49.830576 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:37:49.830580 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:37:49.830585 | orchestrator |
2026-02-23 20:37:49.830589 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-02-23 20:37:49.830594 | orchestrator | Monday 23 February 2026 20:36:27 +0000 (0:00:01.293) 0:04:38.105 *******
2026-02-23 20:37:49.830599 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:37:49.830603 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:37:49.830608 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:37:49.830612 | orchestrator |
2026-02-23 20:37:49.830617 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-02-23 20:37:49.830621 | orchestrator | Monday 23 February 2026 20:36:29 +0000 (0:00:01.446) 0:04:39.987 *******
2026-02-23 20:37:49.830626 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:37:49.830634 | orchestrator |
2026-02-23 20:37:49.830642 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-02-23 20:37:49.830646 | orchestrator | Monday 23 February 2026 20:36:30 +0000 (0:00:01.446) 0:04:41.434 *******
2026-02-23 20:37:49.830652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-23 20:37:49.830670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-23 20:37:49.830676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-23 20:37:49.830681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-23 20:37:49.830687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-23 20:37:49.830709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-23 20:37:49.830715 | orchestrator |
2026-02-23 20:37:49.830720 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-02-23 20:37:49.830725 | orchestrator | Monday 23 February 2026 20:36:35 +0000 (0:00:04.493) 0:04:45.927 *******
2026-02-23 20:37:49.830729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-23 20:37:49.830734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-23 20:37:49.830743 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.830805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-23 20:37:49.830817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-23 20:37:49.830838 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.830844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-23 20:37:49.830849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-23 20:37:49.830858 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.830862 | orchestrator |
2026-02-23 20:37:49.830867 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-02-23 20:37:49.830872 | orchestrator | Monday 23 February 2026 20:36:35 +0000 (0:00:00.602) 0:04:46.530 *******
2026-02-23 20:37:49.830877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-23 20:37:49.830884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-23 20:37:49.830889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-23 20:37:49.830895 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.830900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-23 20:37:49.830904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-23 20:37:49.830909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-23 20:37:49.830914 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.830918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-23 20:37:49.830923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-23 20:37:49.830941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-23 20:37:49.830947 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.830952 | orchestrator |
2026-02-23 20:37:49.830957 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-02-23 20:37:49.830961 | orchestrator | Monday 23 February 2026 20:36:36 +0000 (0:00:00.787) 0:04:47.317 *******
2026-02-23 20:37:49.830966 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.830970 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.830975 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.830979 | orchestrator |
2026-02-23 20:37:49.830984 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-02-23 20:37:49.830988 | orchestrator | Monday 23 February 2026 20:36:37 +0000 (0:00:00.644) 0:04:47.961 *******
2026-02-23 20:37:49.830993 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:37:49.830998 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:37:49.831002 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:37:49.831007 | orchestrator |
2026-02-23 20:37:49.831011 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-02-23 20:37:49.831016 | orchestrator | Monday 23 February 2026 20:36:38 +0000 (0:00:01.099) 0:04:49.061 *******
2026-02-23 20:37:49.831024 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:37:49.831029 | orchestrator |
2026-02-23 20:37:49.831034 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-02-23 20:37:49.831038 | orchestrator | Monday 23 February 2026 20:36:39 +0000 (0:00:01.303) 0:04:50.365 *******
2026-02-23 20:37:49.831043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-23 20:37:49.831048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:37:49.831057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:37:49.831062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:37:49.831080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-23 20:37:49.831086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:37:49.831094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:37:49.831099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:37:49.831104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:37:49.831112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:37:49.831117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-23 20:37:49.831122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:37:49.831141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:37:49.831152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:37:49.831156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:37:49.831161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-23 20:37:49.831169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-23 20:37:49.831174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-23 20:37:49.831195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-23 20:37:49.831202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-23 20:37:49.831207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-23 20:37:49.831230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-23 20:37:49.831236 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-23 20:37:49.831240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-23 20:37:49.831257 | orchestrator | 2026-02-23 20:37:49.831262 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-23 20:37:49.831267 | orchestrator | Monday 23 February 2026 20:36:43 +0000 (0:00:04.037) 0:04:54.402 ******* 2026-02-23 20:37:49.831275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-23 20:37:49.831285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-23 20:37:49.831290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:37:49.831307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-23 20:37:49.831312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-23 20:37:49.831324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-23 20:37:49.831346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-23 20:37:49.831351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-23 20:37:49.831382 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.831390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:37:49.831395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-23 20:37:49.831400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-23 20:37:49.831408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-23 20:37:49.831413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-23 20:37:49.831421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-23 20:37:49.831448 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.831453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:37:49.831461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-23 20:37:49.831472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-23 20:37:49.831477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:37:49.831487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-23 20:37:49.831492 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.831497 | orchestrator | 2026-02-23 20:37:49.831501 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-23 20:37:49.831506 | orchestrator | Monday 23 February 2026 20:36:44 +0000 (0:00:00.762) 0:04:55.165 ******* 2026-02-23 20:37:49.831511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-23 20:37:49.831516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-23 20:37:49.831523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-23 20:37:49.831528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-23 20:37:49.831543 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.831548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-23 20:37:49.831553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-23 20:37:49.831558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-23 20:37:49.831563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-23 20:37:49.831568 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.831575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-23 20:37:49.831580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-23 20:37:49.831585 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-23 20:37:49.831590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-23 20:37:49.831594 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.831599 | orchestrator | 2026-02-23 20:37:49.831604 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-23 20:37:49.831608 | orchestrator | Monday 23 February 2026 20:36:45 +0000 (0:00:00.923) 0:04:56.089 ******* 2026-02-23 20:37:49.831613 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.831617 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.831622 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.831627 | orchestrator | 2026-02-23 20:37:49.831631 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-23 20:37:49.831636 | orchestrator | Monday 23 February 2026 20:36:45 +0000 (0:00:00.386) 0:04:56.476 ******* 2026-02-23 20:37:49.831641 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.831645 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.831650 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.831654 | orchestrator | 2026-02-23 20:37:49.831659 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-23 20:37:49.831664 | orchestrator | Monday 23 February 2026 20:36:46 +0000 (0:00:01.206) 0:04:57.682 ******* 2026-02-23 20:37:49.831668 | orchestrator | 
included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.831673 | orchestrator | 2026-02-23 20:37:49.831677 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-23 20:37:49.831682 | orchestrator | Monday 23 February 2026 20:36:48 +0000 (0:00:01.528) 0:04:59.210 ******* 2026-02-23 20:37:49.831693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:37:49.831699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:37:49.831707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-23 20:37:49.831712 | orchestrator | 2026-02-23 20:37:49.831717 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-23 20:37:49.831722 | orchestrator | Monday 23 February 2026 20:36:50 +0000 (0:00:02.485) 0:05:01.696 ******* 2026-02-23 20:37:49.831726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-23 20:37:49.831735 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.831742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-23 20:37:49.831747 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.831752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-23 20:37:49.831757 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.831762 | orchestrator | 2026-02-23 20:37:49.831767 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-23 20:37:49.831774 | orchestrator | Monday 23 February 2026 20:36:51 +0000 (0:00:00.644) 0:05:02.340 ******* 2026-02-23 20:37:49.831778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-23 20:37:49.831783 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.831788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-23 20:37:49.831792 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.831797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-23 20:37:49.831802 | orchestrator | 
skipping: [testbed-node-2] 2026-02-23 20:37:49.831806 | orchestrator | 2026-02-23 20:37:49.831811 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-23 20:37:49.831815 | orchestrator | Monday 23 February 2026 20:36:52 +0000 (0:00:00.598) 0:05:02.939 ******* 2026-02-23 20:37:49.831820 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.831825 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.831829 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.831834 | orchestrator | 2026-02-23 20:37:49.831839 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-23 20:37:49.831843 | orchestrator | Monday 23 February 2026 20:36:52 +0000 (0:00:00.486) 0:05:03.426 ******* 2026-02-23 20:37:49.831853 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.831858 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.831862 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.831867 | orchestrator | 2026-02-23 20:37:49.831871 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-23 20:37:49.831876 | orchestrator | Monday 23 February 2026 20:36:54 +0000 (0:00:01.363) 0:05:04.790 ******* 2026-02-23 20:37:49.831881 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:37:49.831885 | orchestrator | 2026-02-23 20:37:49.831890 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-23 20:37:49.831895 | orchestrator | Monday 23 February 2026 20:36:55 +0000 (0:00:01.814) 0:05:06.604 ******* 2026-02-23 20:37:49.831902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.831908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.831915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.831920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.831929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.831936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-23 20:37:49.831941 | orchestrator | 2026-02-23 20:37:49.831946 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-23 20:37:49.831951 | orchestrator | Monday 23 February 2026 20:37:01 +0000 (0:00:06.019) 0:05:12.623 ******* 2026-02-23 20:37:49.831958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 
'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.831963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.831971 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.831976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.831984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.831989 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.831994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.832001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-23 20:37:49.832009 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832014 | orchestrator | 2026-02-23 20:37:49.832018 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-23 20:37:49.832023 | orchestrator | Monday 23 
February 2026 20:37:02 +0000 (0:00:00.681) 0:05:13.305 ******* 2026-02-23 20:37:49.832028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832047 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832068 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832077 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-23 20:37:49.832096 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832101 | orchestrator | 2026-02-23 20:37:49.832106 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-23 20:37:49.832114 | orchestrator | Monday 23 February 2026 20:37:04 +0000 (0:00:01.630) 0:05:14.935 ******* 2026-02-23 20:37:49.832118 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.832123 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.832127 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.832132 | orchestrator | 2026-02-23 20:37:49.832137 | orchestrator | TASK [proxysql-config : 
Copying over skyline ProxySQL rules config] ************ 2026-02-23 20:37:49.832144 | orchestrator | Monday 23 February 2026 20:37:05 +0000 (0:00:01.284) 0:05:16.220 ******* 2026-02-23 20:37:49.832148 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.832153 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.832157 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.832162 | orchestrator | 2026-02-23 20:37:49.832167 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-23 20:37:49.832171 | orchestrator | Monday 23 February 2026 20:37:07 +0000 (0:00:02.160) 0:05:18.380 ******* 2026-02-23 20:37:49.832176 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832180 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832185 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832189 | orchestrator | 2026-02-23 20:37:49.832194 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-23 20:37:49.832199 | orchestrator | Monday 23 February 2026 20:37:07 +0000 (0:00:00.322) 0:05:18.703 ******* 2026-02-23 20:37:49.832203 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832208 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832213 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832217 | orchestrator | 2026-02-23 20:37:49.832222 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-23 20:37:49.832226 | orchestrator | Monday 23 February 2026 20:37:08 +0000 (0:00:00.310) 0:05:19.013 ******* 2026-02-23 20:37:49.832231 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832236 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832240 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832245 | orchestrator | 2026-02-23 20:37:49.832249 | orchestrator | TASK [include_role : venus] 
**************************************************** 2026-02-23 20:37:49.832254 | orchestrator | Monday 23 February 2026 20:37:08 +0000 (0:00:00.664) 0:05:19.677 ******* 2026-02-23 20:37:49.832259 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832263 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832268 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832272 | orchestrator | 2026-02-23 20:37:49.832277 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-23 20:37:49.832282 | orchestrator | Monday 23 February 2026 20:37:09 +0000 (0:00:00.321) 0:05:19.998 ******* 2026-02-23 20:37:49.832286 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832291 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832296 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832300 | orchestrator | 2026-02-23 20:37:49.832305 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-23 20:37:49.832309 | orchestrator | Monday 23 February 2026 20:37:09 +0000 (0:00:00.310) 0:05:20.309 ******* 2026-02-23 20:37:49.832314 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832319 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832323 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832341 | orchestrator | 2026-02-23 20:37:49.832346 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-23 20:37:49.832351 | orchestrator | Monday 23 February 2026 20:37:10 +0000 (0:00:00.839) 0:05:21.148 ******* 2026-02-23 20:37:49.832355 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.832360 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.832365 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.832369 | orchestrator | 2026-02-23 20:37:49.832374 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by 
status] ********************** 2026-02-23 20:37:49.832383 | orchestrator | Monday 23 February 2026 20:37:11 +0000 (0:00:00.705) 0:05:21.854 ******* 2026-02-23 20:37:49.832387 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.832392 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.832396 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.832401 | orchestrator | 2026-02-23 20:37:49.832405 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-23 20:37:49.832410 | orchestrator | Monday 23 February 2026 20:37:11 +0000 (0:00:00.340) 0:05:22.194 ******* 2026-02-23 20:37:49.832417 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.832422 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.832426 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.832431 | orchestrator | 2026-02-23 20:37:49.832436 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-23 20:37:49.832440 | orchestrator | Monday 23 February 2026 20:37:12 +0000 (0:00:00.886) 0:05:23.080 ******* 2026-02-23 20:37:49.832445 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.832449 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.832454 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.832459 | orchestrator | 2026-02-23 20:37:49.832463 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-23 20:37:49.832468 | orchestrator | Monday 23 February 2026 20:37:13 +0000 (0:00:01.217) 0:05:24.297 ******* 2026-02-23 20:37:49.832473 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.832477 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.832481 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.832486 | orchestrator | 2026-02-23 20:37:49.832491 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-23 20:37:49.832495 | orchestrator | 
Monday 23 February 2026 20:37:14 +0000 (0:00:00.910) 0:05:25.208 ******* 2026-02-23 20:37:49.832500 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.832505 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.832509 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.832514 | orchestrator | 2026-02-23 20:37:49.832519 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-23 20:37:49.832523 | orchestrator | Monday 23 February 2026 20:37:19 +0000 (0:00:04.696) 0:05:29.905 ******* 2026-02-23 20:37:49.832528 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.832532 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.832537 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.832541 | orchestrator | 2026-02-23 20:37:49.832546 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-23 20:37:49.832551 | orchestrator | Monday 23 February 2026 20:37:21 +0000 (0:00:02.767) 0:05:32.672 ******* 2026-02-23 20:37:49.832555 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.832560 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.832564 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.832569 | orchestrator | 2026-02-23 20:37:49.832573 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-23 20:37:49.832578 | orchestrator | Monday 23 February 2026 20:37:31 +0000 (0:00:09.140) 0:05:41.813 ******* 2026-02-23 20:37:49.832585 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.832590 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.832594 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.832599 | orchestrator | 2026-02-23 20:37:49.832604 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-23 20:37:49.832608 | orchestrator | Monday 23 February 2026 20:37:34 +0000 
(0:00:03.741) 0:05:45.554 ******* 2026-02-23 20:37:49.832613 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:37:49.832617 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:37:49.832622 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:37:49.832627 | orchestrator | 2026-02-23 20:37:49.832631 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-23 20:37:49.832636 | orchestrator | Monday 23 February 2026 20:37:43 +0000 (0:00:09.012) 0:05:54.567 ******* 2026-02-23 20:37:49.832641 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832648 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832653 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832657 | orchestrator | 2026-02-23 20:37:49.832662 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-23 20:37:49.832667 | orchestrator | Monday 23 February 2026 20:37:44 +0000 (0:00:00.298) 0:05:54.866 ******* 2026-02-23 20:37:49.832671 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832676 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832681 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832685 | orchestrator | 2026-02-23 20:37:49.832690 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-23 20:37:49.832694 | orchestrator | Monday 23 February 2026 20:37:44 +0000 (0:00:00.641) 0:05:55.507 ******* 2026-02-23 20:37:49.832699 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832703 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832708 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832712 | orchestrator | 2026-02-23 20:37:49.832717 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-23 20:37:49.832722 | orchestrator | Monday 23 February 2026 20:37:45 +0000 
(0:00:00.363) 0:05:55.871 ******* 2026-02-23 20:37:49.832726 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832731 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832736 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832740 | orchestrator | 2026-02-23 20:37:49.832745 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-23 20:37:49.832749 | orchestrator | Monday 23 February 2026 20:37:45 +0000 (0:00:00.363) 0:05:56.235 ******* 2026-02-23 20:37:49.832754 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832759 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832763 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832768 | orchestrator | 2026-02-23 20:37:49.832772 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-23 20:37:49.832777 | orchestrator | Monday 23 February 2026 20:37:45 +0000 (0:00:00.352) 0:05:56.588 ******* 2026-02-23 20:37:49.832782 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:37:49.832786 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:37:49.832791 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:37:49.832795 | orchestrator | 2026-02-23 20:37:49.832800 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-23 20:37:49.832805 | orchestrator | Monday 23 February 2026 20:37:46 +0000 (0:00:00.322) 0:05:56.910 ******* 2026-02-23 20:37:49.832809 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.832814 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.832818 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.832823 | orchestrator | 2026-02-23 20:37:49.832828 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-23 20:37:49.832832 | orchestrator | Monday 23 February 2026 20:37:47 +0000 (0:00:01.301) 
0:05:58.212 ******* 2026-02-23 20:37:49.832837 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:37:49.832842 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:37:49.832848 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:37:49.832853 | orchestrator | 2026-02-23 20:37:49.832858 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:37:49.832863 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-23 20:37:49.832868 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-23 20:37:49.832872 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-23 20:37:49.832877 | orchestrator | 2026-02-23 20:37:49.832882 | orchestrator | 2026-02-23 20:37:49.832886 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:37:49.832894 | orchestrator | Monday 23 February 2026 20:37:48 +0000 (0:00:00.869) 0:05:59.081 ******* 2026-02-23 20:37:49.832899 | orchestrator | =============================================================================== 2026-02-23 20:37:49.832903 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.14s 2026-02-23 20:37:49.832908 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.01s 2026-02-23 20:37:49.832913 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.44s 2026-02-23 20:37:49.832917 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.02s 2026-02-23 20:37:49.832922 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.23s 2026-02-23 20:37:49.832926 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.13s 
2026-02-23 20:37:49.832931 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.88s 2026-02-23 20:37:49.832936 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.70s 2026-02-23 20:37:49.832940 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.49s 2026-02-23 20:37:49.832947 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.31s 2026-02-23 20:37:49.832952 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.08s 2026-02-23 20:37:49.832957 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.04s 2026-02-23 20:37:49.832961 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.93s 2026-02-23 20:37:49.832966 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.87s 2026-02-23 20:37:49.832971 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.77s 2026-02-23 20:37:49.832975 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.76s 2026-02-23 20:37:49.832980 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 3.76s 2026-02-23 20:37:49.832984 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.74s 2026-02-23 20:37:49.832989 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.59s 2026-02-23 20:37:49.832994 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.54s 2026-02-23 20:37:49.833008 | orchestrator | 2026-02-23 20:37:49 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:49.833013 | orchestrator | 2026-02-23 20:37:49 | INFO  | Wait 1 second(s) until the next check 2026-02-23 
20:37:52.855759 | orchestrator | 2026-02-23 20:37:52 | INFO  | Task a3bac2b4-edaa-4ca6-b851-848c88d8a8fe is in state STARTED 2026-02-23 20:37:52.861023 | orchestrator | 2026-02-23 20:37:52 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED 2026-02-23 20:37:52.863230 | orchestrator | 2026-02-23 20:37:52 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state STARTED 2026-02-23 20:37:52.863889 | orchestrator | 2026-02-23 20:37:52 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:40:00.717332 | orchestrator | 2026-02-23 20:40:00 | INFO  | Task
a3bac2b4-edaa-4ca6-b851-848c88d8a8fe is in state STARTED
2026-02-23 20:40:00.720686 | orchestrator | 2026-02-23 20:40:00 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED
2026-02-23 20:40:00.727626 | orchestrator | 2026-02-23 20:40:00 | INFO  | Task 19d2713b-213a-4108-8c74-1755952f1568 is in state SUCCESS
2026-02-23 20:40:00.729373 | orchestrator |
2026-02-23 20:40:00.729398 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-23 20:40:00.729402 | orchestrator | 2.16.14
2026-02-23 20:40:00.729406 | orchestrator |
2026-02-23 20:40:00.729411 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-02-23 20:40:00.729417 | orchestrator |
2026-02-23 20:40:00.729422 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-23 20:40:00.729427 | orchestrator | Monday 23 February 2026 20:29:28 +0000 (0:00:00.756) 0:00:00.756 *******
2026-02-23 20:40:00.729433 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.729494 | orchestrator |
2026-02-23 20:40:00.729502 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-23 20:40:00.729507 | orchestrator | Monday 23 February 2026 20:29:30 +0000 (0:00:01.571) 0:00:01.944 *******
2026-02-23 20:40:00.729510 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.729514 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.729517 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.729520 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.729523 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.729526 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.729529 | orchestrator |
2026-02-23 20:40:00.729532 | orchestrator | TASK [ceph-facts : Set_fact is_atomic]
*****************************************
2026-02-23 20:40:00.729543 | orchestrator | Monday 23 February 2026 20:29:31 +0000 (0:00:01.571) 0:00:03.515 *******
2026-02-23 20:40:00.729547 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.729550 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.729644 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.729648 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.729651 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.729654 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.729657 | orchestrator |
2026-02-23 20:40:00.729661 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-23 20:40:00.729664 | orchestrator | Monday 23 February 2026 20:29:32 +0000 (0:00:00.855) 0:00:04.371 *******
2026-02-23 20:40:00.729667 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.729670 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.729673 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.729676 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.729679 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.729682 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.729685 | orchestrator |
2026-02-23 20:40:00.729689 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-23 20:40:00.729692 | orchestrator | Monday 23 February 2026 20:29:33 +0000 (0:00:00.948) 0:00:05.319 *******
2026-02-23 20:40:00.729695 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.729698 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.729701 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.729704 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.729707 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.729734 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.729737 | orchestrator |
2026-02-23 20:40:00.729740 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-23 20:40:00.729743 | orchestrator | Monday 23 February 2026 20:29:34 +0000 (0:00:00.825) 0:00:06.145 *******
2026-02-23 20:40:00.729746 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.729750 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.729753 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.729756 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.729759 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.729762 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.729765 | orchestrator |
2026-02-23 20:40:00.729768 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-23 20:40:00.729774 | orchestrator | Monday 23 February 2026 20:29:34 +0000 (0:00:00.534) 0:00:06.679 *******
2026-02-23 20:40:00.729778 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.729781 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.729784 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.729787 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.729790 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.729793 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.729797 | orchestrator |
2026-02-23 20:40:00.729800 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-23 20:40:00.729803 | orchestrator | Monday 23 February 2026 20:29:35 +0000 (0:00:00.736) 0:00:07.416 *******
2026-02-23 20:40:00.729806 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.729809 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.729813 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.729816 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.729819 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.729822 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.729825 | orchestrator |
2026-02-23 20:40:00.729828 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-23 20:40:00.729831 | orchestrator | Monday 23 February 2026 20:29:36 +0000 (0:00:00.776) 0:00:08.193 *******
2026-02-23 20:40:00.729834 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.729837 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.729840 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.729847 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.729850 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.729853 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.729856 | orchestrator |
2026-02-23 20:40:00.729860 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-23 20:40:00.729863 | orchestrator | Monday 23 February 2026 20:29:37 +0000 (0:00:01.011) 0:00:09.204 *******
2026-02-23 20:40:00.729866 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-23 20:40:00.729869 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-23 20:40:00.729872 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-23 20:40:00.729875 | orchestrator |
2026-02-23 20:40:00.729879 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-23 20:40:00.729882 | orchestrator | Monday 23 February 2026 20:29:37 +0000 (0:00:00.699) 0:00:09.904 *******
2026-02-23 20:40:00.729885 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.729888 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.729891 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.729900 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.729903 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.729906 | orchestrator | ok: [testbed-node-2]
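The "Task … is in state STARTED / Wait 1 second(s) until the next check" loop at the top of this log follows a plain polling pattern: query each pending task, drop the ones that reach SUCCESS, and sleep before the next round. A minimal sketch of that pattern, assuming a hypothetical `poll` callable standing in for whatever API the real client queries (`wait_for_tasks` and `fake_poll` are illustrative names, not the actual client interface):

```python
import time

def wait_for_tasks(task_ids, poll, interval=1.0):
    """Poll every `interval` seconds until every task reports SUCCESS.

    `poll(task_id)` returns the task's current state string; this is a
    stand-in for the real task-status API, not its actual signature.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = poll(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)

# Simulated poller: each task reports STARTED twice, then SUCCESS.
calls = {}
def fake_poll(task_id):
    calls[task_id] = calls.get(task_id, 0) + 1
    return "SUCCESS" if calls[task_id] >= 3 else "STARTED"

wait_for_tasks(["a3bac2b4", "4be2e505"], fake_poll, interval=0.01)
```

With the simulated poller, each task is checked three times before the loop exits, mirroring the repeated STARTED lines followed by a single SUCCESS line seen above.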
2026-02-23 20:40:00.729909 | orchestrator | 2026-02-23 20:40:00.729912 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-23 20:40:00.729916 | orchestrator | Monday 23 February 2026 20:29:39 +0000 (0:00:01.201) 0:00:11.105 ******* 2026-02-23 20:40:00.729919 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-23 20:40:00.729922 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-23 20:40:00.729925 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-23 20:40:00.729928 | orchestrator | 2026-02-23 20:40:00.729931 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-23 20:40:00.729934 | orchestrator | Monday 23 February 2026 20:29:41 +0000 (0:00:02.301) 0:00:13.406 ******* 2026-02-23 20:40:00.729938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-23 20:40:00.729941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-23 20:40:00.729944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-23 20:40:00.729947 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.729951 | orchestrator | 2026-02-23 20:40:00.729954 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-23 20:40:00.729957 | orchestrator | Monday 23 February 2026 20:29:42 +0000 (0:00:01.235) 0:00:14.642 ******* 2026-02-23 20:40:00.729961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.729966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.729969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.729972 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.729975 | orchestrator | 2026-02-23 20:40:00.729979 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-23 20:40:00.729982 | orchestrator | Monday 23 February 2026 20:29:43 +0000 (0:00:00.985) 0:00:15.628 ******* 2026-02-23 20:40:00.729988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.729995 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.729998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.730002 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730005 | orchestrator | 2026-02-23 20:40:00.730008 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-23 20:40:00.730039 | orchestrator | Monday 23 February 2026 20:29:44 +0000 (0:00:00.668) 0:00:16.297 ******* 2026-02-23 20:40:00.730049 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-23 20:29:39.754421', 'end': '2026-02-23 20:29:39.868783', 'delta': '0:00:00.114362', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.730056 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-23 20:29:40.617499', 'end': '2026-02-23 20:29:40.728106', 'delta': '0:00:00.110607', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 
'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.730062 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-23 20:29:41.229201', 'end': '2026-02-23 20:29:41.331486', 'delta': '0:00:00.102285', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.730067 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730072 | orchestrator | 2026-02-23 20:40:00.730077 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-23 20:40:00.730082 | orchestrator | Monday 23 February 2026 20:29:44 +0000 (0:00:00.265) 0:00:16.562 ******* 2026-02-23 20:40:00.730091 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.730096 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.730102 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.730107 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.730112 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.730117 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.730121 | orchestrator | 2026-02-23 20:40:00.730124 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-23 20:40:00.730127 | orchestrator | Monday 23 February 2026 20:29:46 +0000 (0:00:01.758) 0:00:18.321 ******* 2026-02-23 20:40:00.730130 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-23 20:40:00.730134 | 
orchestrator | 2026-02-23 20:40:00.730153 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-23 20:40:00.730158 | orchestrator | Monday 23 February 2026 20:29:47 +0000 (0:00:00.767) 0:00:19.088 ******* 2026-02-23 20:40:00.730163 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730166 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.730169 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.730172 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.730175 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.730179 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.730182 | orchestrator | 2026-02-23 20:40:00.730185 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-23 20:40:00.730188 | orchestrator | Monday 23 February 2026 20:29:48 +0000 (0:00:01.787) 0:00:20.877 ******* 2026-02-23 20:40:00.730191 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730194 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.730197 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.730200 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.730203 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.730206 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.730210 | orchestrator | 2026-02-23 20:40:00.730213 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-23 20:40:00.730216 | orchestrator | Monday 23 February 2026 20:29:51 +0000 (0:00:02.498) 0:00:23.376 ******* 2026-02-23 20:40:00.730219 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730222 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.730225 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.730228 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.730231 | 
orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.730234 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.730237 | orchestrator | 2026-02-23 20:40:00.730241 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-23 20:40:00.730244 | orchestrator | Monday 23 February 2026 20:29:52 +0000 (0:00:01.291) 0:00:24.667 ******* 2026-02-23 20:40:00.730247 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730250 | orchestrator | 2026-02-23 20:40:00.730253 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-23 20:40:00.730256 | orchestrator | Monday 23 February 2026 20:29:52 +0000 (0:00:00.220) 0:00:24.888 ******* 2026-02-23 20:40:00.730259 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730262 | orchestrator | 2026-02-23 20:40:00.730266 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-23 20:40:00.730269 | orchestrator | Monday 23 February 2026 20:29:53 +0000 (0:00:00.293) 0:00:25.182 ******* 2026-02-23 20:40:00.730272 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730275 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.730487 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.730502 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.730506 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.730510 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.730513 | orchestrator | 2026-02-23 20:40:00.730517 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-23 20:40:00.730525 | orchestrator | Monday 23 February 2026 20:29:54 +0000 (0:00:00.805) 0:00:25.987 ******* 2026-02-23 20:40:00.730528 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730532 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.730536 | 
orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.730539 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.730544 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.730549 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.730555 | orchestrator | 2026-02-23 20:40:00.730560 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-23 20:40:00.730565 | orchestrator | Monday 23 February 2026 20:29:54 +0000 (0:00:00.872) 0:00:26.859 ******* 2026-02-23 20:40:00.730570 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730576 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.730581 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.730587 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.730592 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.730598 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.730602 | orchestrator | 2026-02-23 20:40:00.730606 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-23 20:40:00.730609 | orchestrator | Monday 23 February 2026 20:29:55 +0000 (0:00:00.802) 0:00:27.661 ******* 2026-02-23 20:40:00.730613 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730616 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.730620 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.730623 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.730627 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.730630 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.730634 | orchestrator | 2026-02-23 20:40:00.730637 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-23 20:40:00.730641 | orchestrator | Monday 23 February 2026 20:29:56 +0000 (0:00:01.083) 0:00:28.744 ******* 2026-02-23 20:40:00.730644 | 
orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730648 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.730651 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.730655 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.730658 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.730662 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.730665 | orchestrator | 2026-02-23 20:40:00.730669 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-23 20:40:00.730672 | orchestrator | Monday 23 February 2026 20:29:57 +0000 (0:00:00.559) 0:00:29.304 ******* 2026-02-23 20:40:00.730676 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730680 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.730683 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.730687 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.730691 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.730694 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.730698 | orchestrator | 2026-02-23 20:40:00.730702 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-23 20:40:00.730706 | orchestrator | Monday 23 February 2026 20:29:58 +0000 (0:00:00.928) 0:00:30.232 ******* 2026-02-23 20:40:00.730709 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.730712 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.730715 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.730718 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.730721 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.730727 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.730730 | orchestrator | 2026-02-23 20:40:00.730733 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 
2026-02-23 20:40:00.730736 | orchestrator | Monday 23 February 2026 20:29:59 +0000 (0:00:01.188) 0:00:31.420 ******* 2026-02-23 20:40:00.730743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16360c2d--86c0--538a--b982--f32cf88f5f8a-osd--block--16360c2d--86c0--538a--b982--f32cf88f5f8a', 'dm-uuid-LVM-Nkdbq1LawE0ReTPXUhLEG2R6QcqUR8xkbZwCH11QO4HjCHQ5LicCUX5XTJzN8kYs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fef89255--3917--5f7c--b809--8ef443377219-osd--block--fef89255--3917--5f7c--b809--8ef443377219', 'dm-uuid-LVM-YvzUYwl1JcgvAVAxZPXLVxr4EUsrX9IXRdgt6Zmazq2UbzYqEvTBDrHdTHFQrMcI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': 
'', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b14837c--f03f--563c--b8ac--393f544981fc-osd--block--2b14837c--f03f--563c--b8ac--393f544981fc', 'dm-uuid-LVM-tNTgI5saESAg1nCaqlR9MKL12ZN7k9vkHltVmwqAKhddPI6QQrGkT5rHExewoTVN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21252442--555c--5549--b537--6075952af6e0-osd--block--21252442--555c--5549--b537--6075952af6e0', 
'dm-uuid-LVM-WbJ6jQDFuG0eiw2AvPnFKwfTGKyQ1HsOQRbFUmDP0Q2qMZ3x45u1GgjgnrCRetLP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part16', 
'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.730817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--16360c2d--86c0--538a--b982--f32cf88f5f8a-osd--block--16360c2d--86c0--538a--b982--f32cf88f5f8a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eWooiD-b2sR-6z2q-VQbq-mprP-2EvV-5aCXVl', 'scsi-0QEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da', 'scsi-SQEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.730825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fef89255--3917--5f7c--b809--8ef443377219-osd--block--fef89255--3917--5f7c--b809--8ef443377219'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rP690M-DyCY-S28R-pawf-vdDl-Z4lr-xzSgcm', 'scsi-0QEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4', 'scsi-SQEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.730839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9', 'scsi-SQEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.730846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.730857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-23 20:40:00.730861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.730864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731121 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.731185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part1', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part14', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part15', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part16', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731199 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2b14837c--f03f--563c--b8ac--393f544981fc-osd--block--2b14837c--f03f--563c--b8ac--393f544981fc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jUu12U-JDUP-t2xn-uCMW-K73I-fPdl-uxTrzn', 'scsi-0QEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654', 'scsi-SQEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--21252442--555c--5549--b537--6075952af6e0-osd--block--21252442--555c--5549--b537--6075952af6e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qkizZk-oM4M-RBqR-pMYN-1wz2-ne3V-Umx5TF', 'scsi-0QEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21', 'scsi-SQEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a', 'scsi-SQEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731270 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.731276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--086e8658--baeb--56a9--865d--4af6c70c9ca3-osd--block--086e8658--baeb--56a9--865d--4af6c70c9ca3', 'dm-uuid-LVM-Q2vb73DEBCSwr8JaoWS0rafAX3qiDw9sRjd7guIeqHPd9UbUJ7MgMqoZxEcSOT30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--721c0c76--436b--5140--8464--e8c748d186e3-osd--block--721c0c76--436b--5140--8464--e8c748d186e3', 'dm-uuid-LVM-3i5Fx08Tjohflg5YMo1Pt9tGnn1Rd0joU0KqPjJX2RWrTukQnGeSg2Gldy81ePsb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731388 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79', 'scsi-SQEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part1', 'scsi-SQEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part14', 'scsi-SQEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part15', 'scsi-SQEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part16', 'scsi-SQEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731414 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-47-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731443 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--086e8658--baeb--56a9--865d--4af6c70c9ca3-osd--block--086e8658--baeb--56a9--865d--4af6c70c9ca3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tO5Pdb-Nt1e-J0M6-cyyf-vfOr-lT2b-22Z1Ke', 'scsi-0QEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163', 'scsi-SQEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731448 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.731451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--721c0c76--436b--5140--8464--e8c748d186e3-osd--block--721c0c76--436b--5140--8464--e8c748d186e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7HvTuf-uFA6-YHez-MQb3-c5bY-QZgC-UWVWGV', 'scsi-0QEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33', 'scsi-SQEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0', 'scsi-SQEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731535 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.731540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-23 20:40:00.731773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08', 'scsi-SQEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:40:00.731783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee', 'scsi-SQEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part1', 'scsi-SQEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part14', 'scsi-SQEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part15', 'scsi-SQEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part16', 'scsi-SQEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731792 | 
orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.731804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:40:00.731808 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.731813 | orchestrator | 2026-02-23 20:40:00.731817 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-23 20:40:00.731820 | orchestrator | Monday 23 February 2026 20:30:01 +0000 (0:00:01.692) 0:00:33.113 ******* 2026-02-23 20:40:00.731843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16360c2d--86c0--538a--b982--f32cf88f5f8a-osd--block--16360c2d--86c0--538a--b982--f32cf88f5f8a', 'dm-uuid-LVM-Nkdbq1LawE0ReTPXUhLEG2R6QcqUR8xkbZwCH11QO4HjCHQ5LicCUX5XTJzN8kYs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731848 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fef89255--3917--5f7c--b809--8ef443377219-osd--block--fef89255--3917--5f7c--b809--8ef443377219', 'dm-uuid-LVM-YvzUYwl1JcgvAVAxZPXLVxr4EUsrX9IXRdgt6Zmazq2UbzYqEvTBDrHdTHFQrMcI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731854 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731860 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731873 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b14837c--f03f--563c--b8ac--393f544981fc-osd--block--2b14837c--f03f--563c--b8ac--393f544981fc', 'dm-uuid-LVM-tNTgI5saESAg1nCaqlR9MKL12ZN7k9vkHltVmwqAKhddPI6QQrGkT5rHExewoTVN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731880 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731883 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21252442--555c--5549--b537--6075952af6e0-osd--block--21252442--555c--5549--b537--6075952af6e0', 'dm-uuid-LVM-WbJ6jQDFuG0eiw2AvPnFKwfTGKyQ1HsOQRbFUmDP0Q2qMZ3x45u1GgjgnrCRetLP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731891 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731912 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731930 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731942 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731962 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 
20:40:00.731972 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part1', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part14', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part15', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part16', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.731979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732019 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2b14837c--f03f--563c--b8ac--393f544981fc-osd--block--2b14837c--f03f--563c--b8ac--393f544981fc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jUu12U-JDUP-t2xn-uCMW-K73I-fPdl-uxTrzn', 'scsi-0QEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654', 'scsi-SQEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732032 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--21252442--555c--5549--b537--6075952af6e0-osd--block--21252442--555c--5549--b537--6075952af6e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qkizZk-oM4M-RBqR-pMYN-1wz2-ne3V-Umx5TF', 'scsi-0QEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21', 'scsi-SQEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732040 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-23 20:40:00.732056 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--086e8658--baeb--56a9--865d--4af6c70c9ca3-osd--block--086e8658--baeb--56a9--865d--4af6c70c9ca3', 'dm-uuid-LVM-Q2vb73DEBCSwr8JaoWS0rafAX3qiDw9sRjd7guIeqHPd9UbUJ7MgMqoZxEcSOT30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732063 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--721c0c76--436b--5140--8464--e8c748d186e3-osd--block--721c0c76--436b--5140--8464--e8c748d186e3', 'dm-uuid-LVM-3i5Fx08Tjohflg5YMo1Pt9tGnn1Rd0joU0KqPjJX2RWrTukQnGeSg2Gldy81ePsb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732070 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a', 'scsi-SQEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732074 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732078 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732100 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--16360c2d--86c0--538a--b982--f32cf88f5f8a-osd--block--16360c2d--86c0--538a--b982--f32cf88f5f8a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eWooiD-b2sR-6z2q-VQbq-mprP-2EvV-5aCXVl', 'scsi-0QEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da', 'scsi-SQEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732275 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732285 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732289 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fef89255--3917--5f7c--b809--8ef443377219-osd--block--fef89255--3917--5f7c--b809--8ef443377219'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rP690M-DyCY-S28R-pawf-vdDl-Z4lr-xzSgcm', 'scsi-0QEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4', 'scsi-SQEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732327 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732332 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732335 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9', 'scsi-SQEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732344 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732402 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732407 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732410 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732413 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732418 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732425 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732437 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-23 20:40:00.732442 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.732447 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79', 'scsi-SQEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part1', 'scsi-SQEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part14', 'scsi-SQEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part15', 'scsi-SQEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part16', 'scsi-SQEMU_QEMU_HARDDISK_51cc313f-67b3-4692-9983-b1d477fcfc79-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732461 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-47-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732465 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--086e8658--baeb--56a9--865d--4af6c70c9ca3-osd--block--086e8658--baeb--56a9--865d--4af6c70c9ca3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tO5Pdb-Nt1e-J0M6-cyyf-vfOr-lT2b-22Z1Ke', 'scsi-0QEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163', 'scsi-SQEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732469 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732474 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732479 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732490 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732494 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732497 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732500 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732505 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732511 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.732523 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08', 'scsi-SQEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e763b21-c0db-4257-a8de-dc54d7c7ac08-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732527 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732530 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.732533 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.732537 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:40:00.732541 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732582 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732595 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--721c0c76--436b--5140--8464--e8c748d186e3-osd--block--721c0c76--436b--5140--8464--e8c748d186e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7HvTuf-uFA6-YHez-MQb3-c5bY-QZgC-UWVWGV', 'scsi-0QEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33', 'scsi-SQEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732599 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732603 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732606 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732616 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732619 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732631 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0', 'scsi-SQEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732636 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee', 'scsi-SQEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part1', 'scsi-SQEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part14', 'scsi-SQEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part15', 'scsi-SQEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part16', 'scsi-SQEMU_QEMU_HARDDISK_eef1630a-c3ac-45bb-907d-b74eee84efee-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732643 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732646 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.732649 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-23 20:40:00.732653 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.732656 | orchestrator |
2026-02-23 20:40:00.732668 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-23 20:40:00.732801 | orchestrator | Monday 23 February 2026 20:30:02 +0000 (0:00:01.676) 0:00:34.790 *******
2026-02-23 20:40:00.732805 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.732809 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.732812 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.732815 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.732818 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.732821 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.732824 | orchestrator |
2026-02-23 20:40:00.732827 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-23 20:40:00.732830 | orchestrator | Monday 23 February 2026 20:30:04 +0000 (0:00:02.044) 0:00:36.834 *******
2026-02-23 20:40:00.732833 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.732836 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.732839 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.732842 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.732845 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.732848 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.732884 | orchestrator |
2026-02-23 20:40:00.732888 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-23 20:40:00.732892 | orchestrator | Monday 23 February 2026 20:30:05 +0000 (0:00:00.988) 0:00:37.822 *******
2026-02-23 20:40:00.732949 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.732953 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.732956 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.732960 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.732963 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.732966 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.732970 | orchestrator |
2026-02-23 20:40:00.732975 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-23 20:40:00.732985 | orchestrator | Monday 23 February 2026 20:30:06 +0000 (0:00:00.861) 0:00:38.684 *******
2026-02-23 20:40:00.732990 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.732995 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733001 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733005 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.733010 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.733015 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.733021 | orchestrator |
2026-02-23 20:40:00.733026 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-23 20:40:00.733031 | orchestrator | Monday 23 February 2026 20:30:07 +0000 (0:00:00.587) 0:00:39.271 *******
2026-02-23 20:40:00.733036 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733042 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733045 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733048 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.733052 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.733055 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.733058 | orchestrator |
2026-02-23 20:40:00.733061 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-23 20:40:00.733064 | orchestrator | Monday 23 February 2026 20:30:08 +0000 (0:00:01.075) 0:00:40.347 *******
2026-02-23 20:40:00.733067 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733070 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733073 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733076 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.733079 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.733083 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.733088 | orchestrator |
2026-02-23 20:40:00.733093 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-23 20:40:00.733099 | orchestrator | Monday 23 February 2026 20:30:09 +0000 (0:00:00.772) 0:00:41.119 *******
2026-02-23 20:40:00.733108 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-23 20:40:00.733114 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-23 20:40:00.733119 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-23 20:40:00.733124 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-23 20:40:00.733130 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-23 20:40:00.733136 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-23 20:40:00.733157 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-23 20:40:00.733163 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-23 20:40:00.733168 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-23 20:40:00.733173 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-23 20:40:00.733178 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-23 20:40:00.733184 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-23 20:40:00.733189 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-23 20:40:00.733194 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-23 20:40:00.733200 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-23 20:40:00.733205 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-23 20:40:00.733210 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-23 20:40:00.733216 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-23 20:40:00.733221 | orchestrator |
2026-02-23 20:40:00.733225 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-23 20:40:00.733228 | orchestrator | Monday 23 February 2026 20:30:13 +0000 (0:00:04.233) 0:00:45.353 *******
2026-02-23 20:40:00.733231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-23 20:40:00.733234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-23 20:40:00.733241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-23 20:40:00.733244 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733248 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-23 20:40:00.733251 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-23 20:40:00.733254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-23 20:40:00.733257 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733260 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-23 20:40:00.733297 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-23 20:40:00.733302 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-23 20:40:00.733307 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-23 20:40:00.733317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-23 20:40:00.733322 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-23 20:40:00.733328 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-23 20:40:00.733333 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-23 20:40:00.733339 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-23 20:40:00.733343 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.733349 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.733352 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-23 20:40:00.733356 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-23 20:40:00.733359 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-23 20:40:00.733362 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.733365 | orchestrator |
2026-02-23 20:40:00.733368 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-23 20:40:00.733371 | orchestrator | Monday 23 February 2026 20:30:14 +0000 (0:00:01.166) 0:00:46.520 *******
2026-02-23 20:40:00.733374 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.733377 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.733381 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.733384 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-23 20:40:00.733387 | orchestrator |
2026-02-23 20:40:00.733390 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-23 20:40:00.733394 | orchestrator | Monday 23 February 2026 20:30:16 +0000 (0:00:01.442) 0:00:47.962 *******
2026-02-23 20:40:00.733397 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733400 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733403 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733406 | orchestrator |
2026-02-23 20:40:00.733409 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-23 20:40:00.733413 | orchestrator | Monday 23 February 2026 20:30:16 +0000 (0:00:00.774) 0:00:48.737 *******
2026-02-23 20:40:00.733416 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733419 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733422 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733426 | orchestrator |
2026-02-23 20:40:00.733431 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-23 20:40:00.733436 | orchestrator | Monday 23 February 2026 20:30:17 +0000 (0:00:00.745) 0:00:49.483 *******
2026-02-23 20:40:00.733441 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733447 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733452 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733457 | orchestrator |
2026-02-23 20:40:00.733463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-23 20:40:00.733468 | orchestrator | Monday 23 February 2026 20:30:18 +0000 (0:00:01.229) 0:00:50.712 *******
2026-02-23 20:40:00.733476 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.733479 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.733483 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.733486 | orchestrator |
2026-02-23 20:40:00.733491 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-23 20:40:00.733494 | orchestrator | Monday 23 February 2026 20:30:19 +0000 (0:00:00.981) 0:00:51.693 *******
2026-02-23 20:40:00.733498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:40:00.733501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-23 20:40:00.733504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-23 20:40:00.733507 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733510 | orchestrator |
2026-02-23 20:40:00.733513 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-23 20:40:00.733516 | orchestrator | Monday 23 February 2026 20:30:20 +0000 (0:00:00.677) 0:00:52.371 *******
2026-02-23 20:40:00.733519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:40:00.733522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-23 20:40:00.733525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-23 20:40:00.733528 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733531 | orchestrator |
2026-02-23 20:40:00.733534 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-23 20:40:00.733538 | orchestrator | Monday 23 February 2026 20:30:21 +0000 (0:00:00.725) 0:00:53.096 *******
2026-02-23 20:40:00.733541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:40:00.733544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-23 20:40:00.733547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-23 20:40:00.733550 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733553 | orchestrator |
2026-02-23 20:40:00.733556 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-23 20:40:00.733559 | orchestrator | Monday 23 February 2026 20:30:21 +0000 (0:00:00.488) 0:00:53.585 *******
2026-02-23 20:40:00.733562 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.733565 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.733568 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.733572 | orchestrator |
2026-02-23 20:40:00.733575 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-23 20:40:00.733578 | orchestrator | Monday 23 February 2026 20:30:22 +0000 (0:00:00.597) 0:00:54.183 *******
2026-02-23 20:40:00.733582 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-23 20:40:00.733588 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-23 20:40:00.733620 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-23 20:40:00.733626 | orchestrator |
2026-02-23 20:40:00.733629 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-23 20:40:00.733633 | orchestrator | Monday 23 February 2026 20:30:23 +0000 (0:00:01.514) 0:00:55.698 *******
2026-02-23 20:40:00.733636 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-23 20:40:00.733639 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-23 20:40:00.733642 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-23 20:40:00.733645 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:40:00.733648 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-23 20:40:00.733651 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-23 20:40:00.733655 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-23 20:40:00.733658 | orchestrator |
2026-02-23 20:40:00.733661 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-23 20:40:00.733668 | orchestrator | Monday 23 February 2026 20:30:24 +0000 (0:00:00.837) 0:00:56.536 *******
2026-02-23 20:40:00.733671 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-23 20:40:00.733674 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-23 20:40:00.733677 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-23 20:40:00.733680 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:40:00.733683 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-23 20:40:00.733686 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-23 20:40:00.733689 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-23 20:40:00.733692 | orchestrator |
2026-02-23 20:40:00.733695 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-23 20:40:00.733699 | orchestrator | Monday 23 February 2026 20:30:27 +0000 (0:00:02.452) 0:00:58.988 *******
2026-02-23 20:40:00.733702 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.733706 | orchestrator |
2026-02-23 20:40:00.733709 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-23 20:40:00.733712 | orchestrator | Monday 23 February 2026 20:30:28 +0000 (0:00:01.259) 0:01:00.248 *******
2026-02-23 20:40:00.733715 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.733719 | orchestrator |
2026-02-23 20:40:00.733722 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-23 20:40:00.733727 | orchestrator | Monday 23 February 2026 20:30:29 +0000 (0:00:01.089) 0:01:01.337 *******
2026-02-23 20:40:00.733730 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733733 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733737 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733740 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.733743 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.733746 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.733749 | orchestrator |
2026-02-23 20:40:00.733752 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-23 20:40:00.733755 | orchestrator | Monday 23 February 2026 20:30:30 +0000 (0:00:01.200) 0:01:02.538 *******
2026-02-23 20:40:00.733758 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.733761 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.733765 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.733768 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.733771 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.733774 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.733777 | orchestrator |
2026-02-23 20:40:00.733780 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-23 20:40:00.733783 | orchestrator | Monday 23 February 2026 20:30:31 +0000 (0:00:00.832) 0:01:03.370 *******
2026-02-23 20:40:00.733786 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.733789 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.733792 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.733795 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.733798 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.733802 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.733807 | orchestrator |
2026-02-23 20:40:00.733813 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-23 20:40:00.733818 | orchestrator | Monday 23 February 2026 20:30:32 +0000 (0:00:00.888) 0:01:04.259 *******
2026-02-23 20:40:00.733823 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.733831 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.733835 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.733840 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.733845 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.733849 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.733855 | orchestrator |
2026-02-23 20:40:00.733860 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-23 20:40:00.733866 | orchestrator | Monday 23 February 2026 20:30:33 +0000 (0:00:00.682) 0:01:04.942 *******
2026-02-23 20:40:00.733871 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733877 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733880 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733883 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.733886 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.733903 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.733908 | orchestrator |
2026-02-23 20:40:00.733914 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-23 20:40:00.733918 | orchestrator | Monday 23 February 2026 20:30:34 +0000 (0:00:01.211) 0:01:06.154 *******
2026-02-23 20:40:00.733923 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733928 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733933 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733937 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.733942 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.733948 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.733954 | orchestrator |
2026-02-23 20:40:00.733960 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-23 20:40:00.733965 | orchestrator | Monday 23 February 2026 20:30:34 +0000 (0:00:00.561) 0:01:06.716 *******
2026-02-23 20:40:00.733972 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.733978 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.733983 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.733988 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.733993 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.733998 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.734003 | orchestrator |
2026-02-23 20:40:00.734008 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-23 20:40:00.734033 | orchestrator | Monday 23 February 2026 20:30:35 +0000 (0:00:00.738) 0:01:07.454 *******
2026-02-23 20:40:00.734039 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.734045 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.734050 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.734056 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.734061 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.734067 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.734072 | orchestrator |
2026-02-23 20:40:00.734077 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-23 20:40:00.734083 | orchestrator | Monday 23 February 2026 20:30:36 +0000 (0:00:01.119) 0:01:08.574 *******
2026-02-23 20:40:00.734088 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.734094 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.734100 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.734105 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.734110 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.734116 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.734122 | orchestrator |
2026-02-23 20:40:00.734131 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-23 20:40:00.734147 | orchestrator | Monday 23 February 2026 20:30:37 +0000 (0:00:01.249) 0:01:09.823 *******
2026-02-23 20:40:00.734153 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.734158 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.734163 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.734169 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.734174 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.734179 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.734194 | orchestrator |
2026-02-23 20:40:00.734200 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-23 20:40:00.734206 | orchestrator | Monday 23 February 2026 20:30:38 +0000 (0:00:00.539) 0:01:10.363 *******
2026-02-23 20:40:00.734214 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.734221 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.734227 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.734233 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.734240 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.734248 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.734253 | orchestrator |
2026-02-23 20:40:00.734261 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-23 20:40:00.734274 | orchestrator | Monday 23 February 2026 20:30:39 +0000 (0:00:00.722) 0:01:11.086 *******
2026-02-23 20:40:00.734279 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.734287 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.734294 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.734300 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.734306 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.734314 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.734321 | orchestrator |
2026-02-23 20:40:00.734326 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-23 20:40:00.734333 | orchestrator | Monday 23 February 2026 20:30:39 +0000 (0:00:00.532) 0:01:11.619 *******
2026-02-23 20:40:00.734339 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.734345 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.734351 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.734357 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.734363 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.734369 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.734374 | orchestrator |
2026-02-23 20:40:00.734381 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-23 20:40:00.734387 | orchestrator | Monday 23 February 2026 20:30:40 +0000 (0:00:00.663) 0:01:12.282 *******
2026-02-23 20:40:00.734392 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.734397 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.734401 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.734406 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.734411 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.734416 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.734421 | orchestrator |
2026-02-23 20:40:00.734426 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-23 20:40:00.734430 | orchestrator | Monday 23 February 2026 20:30:40 +0000 (0:00:00.554) 0:01:12.837 *******
2026-02-23 20:40:00.734435 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.734441 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.734446 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.734451 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.734455 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.734460 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.734465 | orchestrator |
2026-02-23 20:40:00.734471 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-23 20:40:00.734478 | orchestrator | Monday 23 February 2026 20:30:41 +0000 (0:00:00.672) 0:01:13.509 *******
2026-02-23 20:40:00.734483 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.734488 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.734493 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.734499 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.734532 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.734539 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.734544 | orchestrator |
2026-02-23 20:40:00.734551 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-23 20:40:00.734556 | orchestrator | Monday 23 February 2026 20:30:42 +0000 (0:00:00.537) 0:01:14.047 *******
2026-02-23 20:40:00.734567 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.734573 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.734577 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.734582 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.734587 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.734592 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.734599 | orchestrator |
2026-02-23 20:40:00.734605 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-23 20:40:00.734611 | orchestrator | Monday 23 February 2026 20:30:42 +0000 (0:00:00.661) 0:01:14.708 *******
2026-02-23 20:40:00.734617 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.734623 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.734629 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.734634 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.734638 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.734643 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.734649 | orchestrator |
2026-02-23 20:40:00.734655 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-23 20:40:00.734660 | orchestrator | Monday 23 February 
2026 20:30:43 +0000 (0:00:00.566) 0:01:15.275 ******* 2026-02-23 20:40:00.734665 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.734670 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.734676 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.734681 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.734686 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.734691 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.734696 | orchestrator | 2026-02-23 20:40:00.734701 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-23 20:40:00.734706 | orchestrator | Monday 23 February 2026 20:30:44 +0000 (0:00:01.176) 0:01:16.452 ******* 2026-02-23 20:40:00.734712 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.734715 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.734718 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.734721 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.734725 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.734728 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.734731 | orchestrator | 2026-02-23 20:40:00.734734 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-23 20:40:00.734737 | orchestrator | Monday 23 February 2026 20:30:46 +0000 (0:00:01.779) 0:01:18.232 ******* 2026-02-23 20:40:00.734740 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.734744 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.734747 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.734750 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.734753 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.734756 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.734759 | orchestrator | 2026-02-23 20:40:00.734762 | orchestrator | TASK [ceph-container-common : Include 
prerequisites.yml] *********************** 2026-02-23 20:40:00.734765 | orchestrator | Monday 23 February 2026 20:30:48 +0000 (0:00:02.453) 0:01:20.685 ******* 2026-02-23 20:40:00.734769 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:00.734773 | orchestrator | 2026-02-23 20:40:00.734776 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-23 20:40:00.734784 | orchestrator | Monday 23 February 2026 20:30:50 +0000 (0:00:01.762) 0:01:22.447 ******* 2026-02-23 20:40:00.734787 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.734790 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.734794 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.734799 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.734804 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.734808 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.734813 | orchestrator | 2026-02-23 20:40:00.734823 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-23 20:40:00.734827 | orchestrator | Monday 23 February 2026 20:30:51 +0000 (0:00:00.713) 0:01:23.161 ******* 2026-02-23 20:40:00.734832 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.734837 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.734842 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.734846 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.734851 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.734856 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.734861 | orchestrator | 2026-02-23 20:40:00.734866 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-23 20:40:00.734871 | 
orchestrator | Monday 23 February 2026 20:30:52 +0000 (0:00:00.864) 0:01:24.026 ******* 2026-02-23 20:40:00.734876 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-23 20:40:00.734881 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-23 20:40:00.734886 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-23 20:40:00.734891 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-23 20:40:00.734897 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-23 20:40:00.734900 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-23 20:40:00.734903 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-23 20:40:00.734906 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-23 20:40:00.734910 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-23 20:40:00.734913 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-23 20:40:00.734938 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-23 20:40:00.734942 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-23 20:40:00.734945 | orchestrator | 2026-02-23 20:40:00.734948 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-23 20:40:00.734951 | orchestrator | Monday 23 February 2026 20:30:53 +0000 (0:00:01.395) 0:01:25.422 ******* 2026-02-23 20:40:00.734955 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.734958 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.734961 | 
orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.734964 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.734967 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.734970 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.734973 | orchestrator | 2026-02-23 20:40:00.734976 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-23 20:40:00.734979 | orchestrator | Monday 23 February 2026 20:30:54 +0000 (0:00:01.241) 0:01:26.663 ******* 2026-02-23 20:40:00.734983 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.734986 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.734989 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.734992 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.734995 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.734998 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735001 | orchestrator | 2026-02-23 20:40:00.735004 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-23 20:40:00.735008 | orchestrator | Monday 23 February 2026 20:30:55 +0000 (0:00:00.570) 0:01:27.233 ******* 2026-02-23 20:40:00.735011 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735014 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735017 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735020 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735023 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735029 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735032 | orchestrator | 2026-02-23 20:40:00.735035 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-23 20:40:00.735038 | orchestrator | Monday 23 February 2026 20:30:56 +0000 (0:00:00.865) 0:01:28.099 ******* 2026-02-23 20:40:00.735041 | 
orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735044 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735047 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735051 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735054 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735057 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735060 | orchestrator | 2026-02-23 20:40:00.735063 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-23 20:40:00.735066 | orchestrator | Monday 23 February 2026 20:30:56 +0000 (0:00:00.621) 0:01:28.720 ******* 2026-02-23 20:40:00.735069 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:00.735073 | orchestrator | 2026-02-23 20:40:00.735076 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-23 20:40:00.735079 | orchestrator | Monday 23 February 2026 20:30:58 +0000 (0:00:01.534) 0:01:30.254 ******* 2026-02-23 20:40:00.735082 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.735085 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.735088 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.735091 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.735094 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.735100 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.735103 | orchestrator | 2026-02-23 20:40:00.735106 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-23 20:40:00.735109 | orchestrator | Monday 23 February 2026 20:31:35 +0000 (0:00:37.043) 0:02:07.298 ******* 2026-02-23 20:40:00.735112 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-23 
20:40:00.735116 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-23 20:40:00.735119 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-23 20:40:00.735122 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735125 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-23 20:40:00.735128 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-23 20:40:00.735131 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-23 20:40:00.735134 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735164 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-23 20:40:00.735169 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-23 20:40:00.735172 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-23 20:40:00.735175 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735178 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-23 20:40:00.735181 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-23 20:40:00.735184 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-23 20:40:00.735187 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735191 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-23 20:40:00.735194 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-23 20:40:00.735197 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-23 20:40:00.735200 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735218 | 
orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-23 20:40:00.735222 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-23 20:40:00.735225 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-23 20:40:00.735228 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735231 | orchestrator | 2026-02-23 20:40:00.735234 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-23 20:40:00.735238 | orchestrator | Monday 23 February 2026 20:31:36 +0000 (0:00:00.689) 0:02:07.987 ******* 2026-02-23 20:40:00.735241 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735244 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735247 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735250 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735253 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735256 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735259 | orchestrator | 2026-02-23 20:40:00.735263 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-23 20:40:00.735266 | orchestrator | Monday 23 February 2026 20:31:36 +0000 (0:00:00.707) 0:02:08.695 ******* 2026-02-23 20:40:00.735269 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735272 | orchestrator | 2026-02-23 20:40:00.735275 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-23 20:40:00.735278 | orchestrator | Monday 23 February 2026 20:31:36 +0000 (0:00:00.117) 0:02:08.813 ******* 2026-02-23 20:40:00.735282 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735285 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735288 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735291 | orchestrator 
| skipping: [testbed-node-0] 2026-02-23 20:40:00.735294 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735297 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735300 | orchestrator | 2026-02-23 20:40:00.735303 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-23 20:40:00.735306 | orchestrator | Monday 23 February 2026 20:31:37 +0000 (0:00:00.536) 0:02:09.349 ******* 2026-02-23 20:40:00.735310 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735313 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735316 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735319 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735322 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735325 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735328 | orchestrator | 2026-02-23 20:40:00.735331 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-23 20:40:00.735334 | orchestrator | Monday 23 February 2026 20:31:38 +0000 (0:00:00.751) 0:02:10.101 ******* 2026-02-23 20:40:00.735338 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735341 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735344 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735347 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735350 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735353 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735356 | orchestrator | 2026-02-23 20:40:00.735359 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-23 20:40:00.735362 | orchestrator | Monday 23 February 2026 20:31:38 +0000 (0:00:00.633) 0:02:10.734 ******* 2026-02-23 20:40:00.735365 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.735369 | orchestrator | ok: 
[testbed-node-3] 2026-02-23 20:40:00.735372 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.735375 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.735378 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.735381 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.735384 | orchestrator | 2026-02-23 20:40:00.735389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-23 20:40:00.735395 | orchestrator | Monday 23 February 2026 20:31:42 +0000 (0:00:03.870) 0:02:14.605 ******* 2026-02-23 20:40:00.735398 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.735401 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.735404 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.735407 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.735410 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.735413 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.735416 | orchestrator | 2026-02-23 20:40:00.735420 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-23 20:40:00.735423 | orchestrator | Monday 23 February 2026 20:31:43 +0000 (0:00:00.571) 0:02:15.176 ******* 2026-02-23 20:40:00.735426 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:00.735430 | orchestrator | 2026-02-23 20:40:00.735433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-23 20:40:00.735436 | orchestrator | Monday 23 February 2026 20:31:44 +0000 (0:00:01.075) 0:02:16.252 ******* 2026-02-23 20:40:00.735439 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735442 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735445 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735449 | orchestrator | 
skipping: [testbed-node-0] 2026-02-23 20:40:00.735452 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735455 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735458 | orchestrator | 2026-02-23 20:40:00.735461 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-23 20:40:00.735464 | orchestrator | Monday 23 February 2026 20:31:45 +0000 (0:00:00.775) 0:02:17.028 ******* 2026-02-23 20:40:00.735467 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735470 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735473 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735476 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735479 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735483 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735486 | orchestrator | 2026-02-23 20:40:00.735489 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-23 20:40:00.735492 | orchestrator | Monday 23 February 2026 20:31:45 +0000 (0:00:00.653) 0:02:17.682 ******* 2026-02-23 20:40:00.735495 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735498 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735514 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735519 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735524 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735529 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735534 | orchestrator | 2026-02-23 20:40:00.735540 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-23 20:40:00.735545 | orchestrator | Monday 23 February 2026 20:31:46 +0000 (0:00:00.758) 0:02:18.440 ******* 2026-02-23 20:40:00.735548 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735551 | orchestrator | 
skipping: [testbed-node-4] 2026-02-23 20:40:00.735554 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735557 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735560 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735563 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735566 | orchestrator | 2026-02-23 20:40:00.735569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-23 20:40:00.735572 | orchestrator | Monday 23 February 2026 20:31:47 +0000 (0:00:00.677) 0:02:19.117 ******* 2026-02-23 20:40:00.735576 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735579 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735582 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735585 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735588 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735593 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735596 | orchestrator | 2026-02-23 20:40:00.735600 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-23 20:40:00.735603 | orchestrator | Monday 23 February 2026 20:31:47 +0000 (0:00:00.663) 0:02:19.781 ******* 2026-02-23 20:40:00.735606 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735609 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735612 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735616 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735621 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735627 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735632 | orchestrator | 2026-02-23 20:40:00.735637 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-23 20:40:00.735643 | orchestrator | Monday 23 February 2026 20:31:48 +0000 
(0:00:00.493) 0:02:20.275 ******* 2026-02-23 20:40:00.735646 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735651 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735656 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735660 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735665 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735670 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735675 | orchestrator | 2026-02-23 20:40:00.735680 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-23 20:40:00.735685 | orchestrator | Monday 23 February 2026 20:31:49 +0000 (0:00:00.903) 0:02:21.178 ******* 2026-02-23 20:40:00.735690 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.735696 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.735701 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.735706 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.735711 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.735717 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.735722 | orchestrator | 2026-02-23 20:40:00.735727 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-23 20:40:00.735734 | orchestrator | Monday 23 February 2026 20:31:49 +0000 (0:00:00.573) 0:02:21.751 ******* 2026-02-23 20:40:00.735739 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.735745 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.735750 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.735755 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.735760 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.735769 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.735774 | orchestrator | 2026-02-23 20:40:00.735779 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] 
********************** 2026-02-23 20:40:00.735784 | orchestrator | Monday 23 February 2026 20:31:51 +0000 (0:00:01.195) 0:02:22.947 ******* 2026-02-23 20:40:00.735788 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:00.735794 | orchestrator | 2026-02-23 20:40:00.735799 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-23 20:40:00.735804 | orchestrator | Monday 23 February 2026 20:31:52 +0000 (0:00:01.067) 0:02:24.015 ******* 2026-02-23 20:40:00.735810 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-23 20:40:00.735815 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-23 20:40:00.735820 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-23 20:40:00.735826 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-23 20:40:00.735831 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-23 20:40:00.735836 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-23 20:40:00.735842 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-23 20:40:00.735847 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-23 20:40:00.735856 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-23 20:40:00.735862 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-23 20:40:00.735867 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-23 20:40:00.735872 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-23 20:40:00.735878 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-23 20:40:00.735883 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-23 20:40:00.735888 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-23 20:40:00.735894 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-23 20:40:00.735899 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-23 20:40:00.735905 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-23 20:40:00.735931 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-23 20:40:00.735938 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-23 20:40:00.735944 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-23 20:40:00.735949 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-23 20:40:00.735955 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-23 20:40:00.735960 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-23 20:40:00.735966 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-23 20:40:00.735971 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-23 20:40:00.735976 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-23 20:40:00.735983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-23 20:40:00.735989 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-23 20:40:00.735994 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-23 20:40:00.735999 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-23 20:40:00.736005 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-23 20:40:00.736010 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-23 20:40:00.736015 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-23 20:40:00.736020 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-23 20:40:00.736026 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-23 20:40:00.736031 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-23 20:40:00.736037 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-23 20:40:00.736042 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-23 20:40:00.736048 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-23 20:40:00.736053 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-23 20:40:00.736059 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-23 20:40:00.736064 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-23 20:40:00.736070 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-23 20:40:00.736075 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-23 20:40:00.736080 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-23 20:40:00.736086 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-23 20:40:00.736091 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-23 20:40:00.736097 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-23 20:40:00.736101 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-23 20:40:00.736107 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-23 20:40:00.736118 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-23 20:40:00.736124 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-23 20:40:00.736129 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-23 20:40:00.736167 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-23 20:40:00.736174 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-23 20:40:00.736180 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-23 20:40:00.736185 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-23 20:40:00.736190 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-23 20:40:00.736196 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-23 20:40:00.736201 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-23 20:40:00.736206 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-23 20:40:00.736211 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-23 20:40:00.736217 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-23 20:40:00.736222 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-23 20:40:00.736227 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-23 20:40:00.736232 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-23 20:40:00.736237 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-23 20:40:00.736242 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-23 20:40:00.736246 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-23 20:40:00.736252 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-23 20:40:00.736257 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-23 20:40:00.736262 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-23 20:40:00.736267 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-23 20:40:00.736271 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-23 20:40:00.736275 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-23 20:40:00.736299 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-23 20:40:00.736308 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-23 20:40:00.736313 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-23 20:40:00.736318 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-23 20:40:00.736323 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-02-23 20:40:00.736328 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-23 20:40:00.736333 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-23 20:40:00.736338 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-02-23 20:40:00.736343 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-02-23 20:40:00.736347 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-23 20:40:00.736352 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-02-23 20:40:00.736357 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-02-23 20:40:00.736362 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-02-23 20:40:00.736367 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-02-23 20:40:00.736372 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-02-23 20:40:00.736377 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-23 20:40:00.736389 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-02-23 20:40:00.736395 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-02-23 20:40:00.736400 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-02-23 20:40:00.736404 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-02-23 20:40:00.736411 | orchestrator |
2026-02-23 20:40:00.736414 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-23 20:40:00.736417 | orchestrator | Monday 23 February 2026 20:31:59 +0000 (0:00:06.948) 0:02:30.964 *******
2026-02-23 20:40:00.736420 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736423 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736427 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736430 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-23 20:40:00.736433 | orchestrator |
2026-02-23 20:40:00.736436 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-23 20:40:00.736439 | orchestrator | Monday 23 February 2026 20:32:00 +0000 (0:00:01.087) 0:02:32.051 *******
2026-02-23 20:40:00.736442 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-23 20:40:00.736446 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-23 20:40:00.736449 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-23 20:40:00.736452 | orchestrator |
2026-02-23 20:40:00.736455 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-23 20:40:00.736461 | orchestrator | Monday 23 February 2026 20:32:01 +0000 (0:00:01.025) 0:02:33.076 *******
2026-02-23 20:40:00.736464 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-23 20:40:00.736468 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-23 20:40:00.736471 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-23 20:40:00.736474 | orchestrator |
2026-02-23 20:40:00.736477 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-23 20:40:00.736480 | orchestrator | Monday 23 February 2026 20:32:02 +0000 (0:00:01.180) 0:02:34.256 *******
2026-02-23 20:40:00.736483 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.736486 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.736489 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.736492 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736495 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736498 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736502 | orchestrator |
2026-02-23 20:40:00.736505 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-23 20:40:00.736508 | orchestrator | Monday 23 February 2026 20:32:03 +0000 (0:00:00.700) 0:02:34.956 *******
2026-02-23 20:40:00.736511 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.736514 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.736517 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.736520 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736523 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736526 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736529 | orchestrator |
2026-02-23 20:40:00.736532 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-23 20:40:00.736535 | orchestrator | Monday 23 February 2026 20:32:04 +0000 (0:00:01.107) 0:02:36.063 *******
2026-02-23 20:40:00.736541 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.736544 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.736547 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.736550 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736553 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736556 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736559 | orchestrator |
2026-02-23 20:40:00.736578 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-23 20:40:00.736581 | orchestrator | Monday 23 February 2026 20:32:05 +0000 (0:00:01.099) 0:02:37.163 *******
2026-02-23 20:40:00.736584 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.736587 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.736590 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.736593 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736596 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736600 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736603 | orchestrator |
2026-02-23 20:40:00.736606 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-23 20:40:00.736609 | orchestrator | Monday 23 February 2026 20:32:06 +0000 (0:00:01.085) 0:02:38.248 *******
2026-02-23 20:40:00.736612 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.736615 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.736618 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.736621 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736624 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736627 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736630 | orchestrator |
2026-02-23 20:40:00.736633 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-23 20:40:00.736636 | orchestrator | Monday 23 February 2026 20:32:07 +0000 (0:00:01.066) 0:02:39.315 *******
2026-02-23 20:40:00.736639 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.736642 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.736646 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.736649 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736652 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736655 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736658 | orchestrator |
2026-02-23 20:40:00.736661 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-23 20:40:00.736664 | orchestrator | Monday 23 February 2026 20:32:08 +0000 (0:00:01.009) 0:02:40.325 *******
2026-02-23 20:40:00.736667 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.736670 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.736673 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.736676 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736679 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736683 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736688 | orchestrator |
2026-02-23 20:40:00.736692 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-23 20:40:00.736697 | orchestrator | Monday 23 February 2026 20:32:09 +0000 (0:00:00.755) 0:02:41.081 *******
2026-02-23 20:40:00.736702 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.736707 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.736711 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.736716 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736720 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736725 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736729 | orchestrator |
2026-02-23 20:40:00.736734 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-23 20:40:00.736738 | orchestrator | Monday 23 February 2026 20:32:10 +0000 (0:00:01.108) 0:02:42.189 *******
2026-02-23 20:40:00.736742 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736746 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736759 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736764 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.736770 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.736775 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.736780 | orchestrator |
2026-02-23 20:40:00.736787 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-23 20:40:00.736791 | orchestrator | Monday 23 February 2026 20:32:13 +0000 (0:00:02.762) 0:02:44.952 *******
2026-02-23 20:40:00.736794 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.736797 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.736800 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.736803 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736806 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736809 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736812 | orchestrator |
2026-02-23 20:40:00.736815 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-23 20:40:00.736818 | orchestrator | Monday 23 February 2026 20:32:14 +0000 (0:00:01.026) 0:02:45.979 *******
2026-02-23 20:40:00.736822 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.736825 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.736828 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736831 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736834 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.736837 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736840 | orchestrator |
2026-02-23 20:40:00.736843 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-23 20:40:00.736846 | orchestrator | Monday 23 February 2026 20:32:14 +0000 (0:00:00.781) 0:02:46.761 *******
2026-02-23 20:40:00.736849 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.736852 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.736855 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.736858 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736862 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736865 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736868 | orchestrator |
2026-02-23 20:40:00.736871 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-23 20:40:00.736874 | orchestrator | Monday 23 February 2026 20:32:15 +0000 (0:00:00.855) 0:02:47.616 *******
2026-02-23 20:40:00.736877 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-23 20:40:00.736880 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-23 20:40:00.736883 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-23 20:40:00.736886 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736903 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736906 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736909 | orchestrator |
2026-02-23 20:40:00.736913 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-23 20:40:00.736916 | orchestrator | Monday 23 February 2026 20:32:16 +0000 (0:00:00.726) 0:02:48.343 *******
2026-02-23 20:40:00.736920 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-23 20:40:00.736924 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-23 20:40:00.736930 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.736934 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-23 20:40:00.736937 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-23 20:40:00.736940 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.736943 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-23 20:40:00.736947 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-23 20:40:00.736950 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.736953 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736956 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736959 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736962 | orchestrator |
2026-02-23 20:40:00.736967 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-23 20:40:00.736970 | orchestrator | Monday 23 February 2026 20:32:17 +0000 (0:00:00.748) 0:02:49.091 *******
2026-02-23 20:40:00.736973 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.736976 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.736980 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.736983 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.736986 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.736989 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.736992 | orchestrator |
2026-02-23 20:40:00.736995 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-23 20:40:00.736998 | orchestrator | Monday 23 February 2026 20:32:17 +0000 (0:00:00.757) 0:02:49.849 *******
2026-02-23 20:40:00.737001 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737004 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.737007 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.737010 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.737014 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.737017 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.737020 | orchestrator |
2026-02-23 20:40:00.737023 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-23 20:40:00.737026 | orchestrator | Monday 23 February 2026 20:32:18 +0000 (0:00:00.703) 0:02:50.552 *******
2026-02-23 20:40:00.737029 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737032 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.737035 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.737038 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.737041 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.737044 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.737047 | orchestrator |
2026-02-23 20:40:00.737051 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-23 20:40:00.737054 | orchestrator | Monday 23 February 2026 20:32:19 +0000 (0:00:00.740) 0:02:51.293 *******
2026-02-23 20:40:00.737057 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737062 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.737065 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.737068 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.737071 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.737074 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.737077 | orchestrator |
2026-02-23 20:40:00.737080 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-23 20:40:00.737092 | orchestrator | Monday 23 February 2026 20:32:20 +0000 (0:00:01.079) 0:02:52.372 *******
2026-02-23 20:40:00.737096 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737099 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.737102 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.737105 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.737108 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.737111 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.737114 | orchestrator |
2026-02-23 20:40:00.737117 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-23 20:40:00.737120 | orchestrator | Monday 23 February 2026 20:32:21 +0000 (0:00:00.597) 0:02:52.969 *******
2026-02-23 20:40:00.737124 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.737127 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.737130 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.737133 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.737136 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.737153 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.737156 | orchestrator |
2026-02-23 20:40:00.737161 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-23 20:40:00.737166 | orchestrator | Monday 23 February 2026 20:32:22 +0000 (0:00:00.961) 0:02:53.931 *******
2026-02-23 20:40:00.737171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:40:00.737176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-23 20:40:00.737181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-23 20:40:00.737187 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737192 | orchestrator |
2026-02-23 20:40:00.737197 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-23 20:40:00.737202 | orchestrator | Monday 23 February 2026 20:32:22 +0000 (0:00:00.355) 0:02:54.287 *******
2026-02-23 20:40:00.737206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:40:00.737209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-23 20:40:00.737212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-23 20:40:00.737216 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737219 | orchestrator |
2026-02-23 20:40:00.737222 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-23 20:40:00.737225 | orchestrator | Monday 23 February 2026 20:32:22 +0000 (0:00:00.374) 0:02:54.661 *******
2026-02-23 20:40:00.737228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:40:00.737231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-23 20:40:00.737234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-23 20:40:00.737237 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737240 | orchestrator |
2026-02-23 20:40:00.737243 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-23 20:40:00.737247 | orchestrator | Monday 23 February 2026 20:32:23 +0000 (0:00:00.433) 0:02:55.095 *******
2026-02-23 20:40:00.737250 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:40:00.737253 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:40:00.737256 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:40:00.737259 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.737262 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.737265 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.737268 | orchestrator |
2026-02-23 20:40:00.737274 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-23 20:40:00.737277 | orchestrator | Monday 23 February 2026 20:32:24 +0000 (0:00:01.093) 0:02:56.188 *******
2026-02-23 20:40:00.737280 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-23 20:40:00.737283 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-23 20:40:00.737287 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-23 20:40:00.737290 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.737293 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-23 20:40:00.737296 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.737299 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-23 20:40:00.737302 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-23 20:40:00.737305 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.737308 | orchestrator |
2026-02-23 20:40:00.737311 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-23 20:40:00.737314 | orchestrator | Monday 23 February 2026 20:32:27 +0000 (0:00:03.086) 0:02:59.274 *******
2026-02-23 20:40:00.737318 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:40:00.737321 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:40:00.737324 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.737327 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:40:00.737330 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.737333 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.737336 | orchestrator |
2026-02-23 20:40:00.737339 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-23 20:40:00.737342 | orchestrator | Monday 23 February 2026 20:32:30 +0000 (0:00:03.245) 0:03:02.520 *******
2026-02-23 20:40:00.737345 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:40:00.737349 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:40:00.737352 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:40:00.737355 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.737358 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.737361 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.737364 | orchestrator |
2026-02-23 20:40:00.737367 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-23 20:40:00.737370 | orchestrator | Monday 23 February 2026 20:32:31 +0000 (0:00:01.234) 0:03:03.754 *******
2026-02-23 20:40:00.737373 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.737376 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.737379 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.737386 | orchestrator |
2026-02-23 20:40:00.737419 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-23 20:40:00.737445 | orchestrator | Monday 23 February 2026 20:32:32 +0000 (0:00:01.031) 0:03:04.786 *******
2026-02-23 20:40:00.737451 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.737455 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.737459 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.737462 | orchestrator |
2026-02-23 20:40:00.737465 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-23 20:40:00.737468 | orchestrator | Monday 23 February 2026 20:32:33 +0000 (0:00:00.367) 0:03:05.154 *******
2026-02-23 20:40:00.737472 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.737475 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.737478 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.737481 | orchestrator |
2026-02-23 20:40:00.737484 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-23 20:40:00.737487 | orchestrator | Monday 23 February 2026 20:32:34 +0000 (0:00:01.446) 0:03:06.600 *******
2026-02-23 20:40:00.737491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-23 20:40:00.737494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-23 20:40:00.737500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-23 20:40:00.737503 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.737506 | orchestrator |
2026-02-23 20:40:00.737510 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-23 20:40:00.737513 | orchestrator | Monday 23 February 2026 20:32:35 +0000 (0:00:00.603) 0:03:07.204 *******
2026-02-23 20:40:00.737516 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.737519 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.737523 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.737526 | orchestrator |
2026-02-23 20:40:00.737529 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-23 20:40:00.737532 | orchestrator | Monday 23 February 2026 20:32:35 +0000 (0:00:00.365) 0:03:07.569 *******
2026-02-23 20:40:00.737535 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.737538 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.737541 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.737545 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-02-23 20:40:00.737548 | orchestrator |
2026-02-23 20:40:00.737551 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-23 20:40:00.737554 | orchestrator | Monday 23 February 2026 20:32:36 +0000 (0:00:00.953) 0:03:08.522 *******
2026-02-23 20:40:00.737557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:40:00.737560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-23 20:40:00.737564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-23 20:40:00.737567 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737570 | orchestrator |
2026-02-23 20:40:00.737573 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-23 20:40:00.737576 | orchestrator | Monday 23 February 2026 20:32:36 +0000 (0:00:00.388) 0:03:08.911 *******
2026-02-23 20:40:00.737579 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737582 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.737586 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.737589 | orchestrator |
2026-02-23 20:40:00.737592 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-23 20:40:00.737595 | orchestrator | Monday 23 February 2026 20:32:37 +0000 (0:00:00.291) 0:03:09.202 *******
2026-02-23 20:40:00.737598 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737601 | orchestrator |
2026-02-23 20:40:00.737604 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-23 20:40:00.737609 | orchestrator | Monday 23 February 2026 20:32:37 +0000 (0:00:00.204) 0:03:09.407 *******
2026-02-23 20:40:00.737612 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737616 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.737619 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.737622 | orchestrator |
2026-02-23 20:40:00.737625 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-23 20:40:00.737628 | orchestrator | Monday 23 February 2026 20:32:37 +0000 (0:00:00.297) 0:03:09.705 *******
2026-02-23 20:40:00.737631 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737634 | orchestrator |
2026-02-23 20:40:00.737638 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-23 20:40:00.737641 | orchestrator | Monday 23 February 2026 20:32:37 +0000 (0:00:00.218) 0:03:09.923 *******
2026-02-23 20:40:00.737644 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737647 | orchestrator |
2026-02-23 20:40:00.737650 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-23 20:40:00.737653 | orchestrator | Monday 23 February 2026 20:32:38 +0000 (0:00:00.192) 0:03:10.116 *******
2026-02-23 20:40:00.737657 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737660 | orchestrator |
2026-02-23 20:40:00.737663 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-23 20:40:00.737668 | orchestrator | Monday 23 February 2026 20:32:38 +0000 (0:00:00.114) 0:03:10.231 *******
2026-02-23 20:40:00.737671 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737674 | orchestrator |
2026-02-23 20:40:00.737678 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-23 20:40:00.737681 | orchestrator | Monday 23 February 2026 20:32:38 +0000 (0:00:00.645) 0:03:10.876 *******
2026-02-23 20:40:00.737684 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737687 | orchestrator |
2026-02-23 20:40:00.737690 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-23 20:40:00.737693 | orchestrator | Monday 23 February 2026 20:32:39 +0000 (0:00:00.219) 0:03:11.095 *******
2026-02-23 20:40:00.737696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-23 20:40:00.737700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-23 20:40:00.737703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-23 20:40:00.737706 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737709 | orchestrator |
2026-02-23 20:40:00.737712 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-23 20:40:00.737725 | orchestrator | Monday 23 February 2026 20:32:39 +0000 (0:00:00.436) 0:03:11.531 *******
2026-02-23 20:40:00.737729 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737732 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:40:00.737735 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:40:00.737738 | orchestrator |
2026-02-23 20:40:00.737741 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-23 20:40:00.737744 | orchestrator | Monday 23 February 2026 20:32:39 +0000 (0:00:00.300) 0:03:11.832 *******
2026-02-23 20:40:00.737747 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737750 | orchestrator |
2026-02-23 20:40:00.737753 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-23 20:40:00.737757 | orchestrator | Monday 23 February 2026 20:32:40 +0000 (0:00:00.193) 0:03:12.026 *******
2026-02-23 20:40:00.737760 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:40:00.737765 | orchestrator |
2026-02-23 20:40:00.737771 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-23 20:40:00.737776 | orchestrator | Monday 23 February 2026 20:32:40 +0000 (0:00:00.201) 0:03:12.227 *******
2026-02-23 20:40:00.737781 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.737786 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.737791 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.737796 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.737800 | orchestrator | 2026-02-23 20:40:00.737805 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-02-23 20:40:00.737809 | orchestrator | Monday 23 February 2026 20:32:41 +0000 (0:00:01.043) 0:03:13.270 ******* 2026-02-23 20:40:00.737814 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.737819 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.737825 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.737830 | orchestrator | 2026-02-23 20:40:00.737835 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-23 20:40:00.737841 | orchestrator | Monday 23 February 2026 20:32:41 +0000 (0:00:00.373) 0:03:13.644 ******* 2026-02-23 20:40:00.737846 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.737852 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.737857 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.737862 | orchestrator | 2026-02-23 20:40:00.737867 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-23 20:40:00.737872 | orchestrator | Monday 23 February 2026 20:32:42 +0000 (0:00:01.196) 0:03:14.841 ******* 2026-02-23 20:40:00.737878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-23 20:40:00.737881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-23 20:40:00.737887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-23 20:40:00.737891 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.737894 | orchestrator | 2026-02-23 20:40:00.737899 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called 
after restart] ********* 2026-02-23 20:40:00.737904 | orchestrator | Monday 23 February 2026 20:32:43 +0000 (0:00:00.767) 0:03:15.608 ******* 2026-02-23 20:40:00.737909 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.737914 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.737919 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.737924 | orchestrator | 2026-02-23 20:40:00.737928 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-23 20:40:00.737933 | orchestrator | Monday 23 February 2026 20:32:44 +0000 (0:00:00.432) 0:03:16.040 ******* 2026-02-23 20:40:00.737937 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.737941 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.737948 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.737953 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.737957 | orchestrator | 2026-02-23 20:40:00.737962 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-23 20:40:00.737966 | orchestrator | Monday 23 February 2026 20:32:44 +0000 (0:00:00.742) 0:03:16.783 ******* 2026-02-23 20:40:00.737971 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.737975 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.737980 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.737985 | orchestrator | 2026-02-23 20:40:00.737990 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-23 20:40:00.737996 | orchestrator | Monday 23 February 2026 20:32:45 +0000 (0:00:00.425) 0:03:17.209 ******* 2026-02-23 20:40:00.737999 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.738002 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.738005 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.738008 | 
orchestrator | 2026-02-23 20:40:00.738011 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-23 20:40:00.738043 | orchestrator | Monday 23 February 2026 20:32:46 +0000 (0:00:01.330) 0:03:18.539 ******* 2026-02-23 20:40:00.738047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-23 20:40:00.738050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-23 20:40:00.738053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-23 20:40:00.738056 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.738059 | orchestrator | 2026-02-23 20:40:00.738062 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-23 20:40:00.738065 | orchestrator | Monday 23 February 2026 20:32:47 +0000 (0:00:00.564) 0:03:19.104 ******* 2026-02-23 20:40:00.738069 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.738072 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.738075 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.738078 | orchestrator | 2026-02-23 20:40:00.738081 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-23 20:40:00.738084 | orchestrator | Monday 23 February 2026 20:32:47 +0000 (0:00:00.283) 0:03:19.387 ******* 2026-02-23 20:40:00.738087 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.738090 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.738093 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.738096 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738099 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738118 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738122 | orchestrator | 2026-02-23 20:40:00.738125 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-23 
20:40:00.738128 | orchestrator | Monday 23 February 2026 20:32:48 +0000 (0:00:00.713) 0:03:20.101 ******* 2026-02-23 20:40:00.738131 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.738135 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.738162 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.738166 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:00.738170 | orchestrator | 2026-02-23 20:40:00.738173 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-23 20:40:00.738176 | orchestrator | Monday 23 February 2026 20:32:48 +0000 (0:00:00.747) 0:03:20.848 ******* 2026-02-23 20:40:00.738179 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738182 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738185 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738188 | orchestrator | 2026-02-23 20:40:00.738192 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-23 20:40:00.738195 | orchestrator | Monday 23 February 2026 20:32:49 +0000 (0:00:00.514) 0:03:21.363 ******* 2026-02-23 20:40:00.738198 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.738201 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.738204 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.738207 | orchestrator | 2026-02-23 20:40:00.738210 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-23 20:40:00.738214 | orchestrator | Monday 23 February 2026 20:32:50 +0000 (0:00:01.169) 0:03:22.532 ******* 2026-02-23 20:40:00.738217 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-23 20:40:00.738221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-23 20:40:00.738226 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-2)  2026-02-23 20:40:00.738232 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738237 | orchestrator | 2026-02-23 20:40:00.738242 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-23 20:40:00.738247 | orchestrator | Monday 23 February 2026 20:32:51 +0000 (0:00:00.673) 0:03:23.206 ******* 2026-02-23 20:40:00.738252 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738255 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738258 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738261 | orchestrator | 2026-02-23 20:40:00.738264 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-23 20:40:00.738267 | orchestrator | 2026-02-23 20:40:00.738271 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-23 20:40:00.738274 | orchestrator | Monday 23 February 2026 20:32:51 +0000 (0:00:00.624) 0:03:23.830 ******* 2026-02-23 20:40:00.738277 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:00.738281 | orchestrator | 2026-02-23 20:40:00.738284 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-23 20:40:00.738287 | orchestrator | Monday 23 February 2026 20:32:52 +0000 (0:00:00.914) 0:03:24.744 ******* 2026-02-23 20:40:00.738290 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:00.738293 | orchestrator | 2026-02-23 20:40:00.738296 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-23 20:40:00.738303 | orchestrator | Monday 23 February 2026 20:32:53 +0000 (0:00:00.464) 0:03:25.209 ******* 2026-02-23 20:40:00.738307 | orchestrator | 
ok: [testbed-node-0] 2026-02-23 20:40:00.738310 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738313 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738316 | orchestrator | 2026-02-23 20:40:00.738319 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-23 20:40:00.738322 | orchestrator | Monday 23 February 2026 20:32:54 +0000 (0:00:00.796) 0:03:26.006 ******* 2026-02-23 20:40:00.738325 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738329 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738332 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738335 | orchestrator | 2026-02-23 20:40:00.738338 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-23 20:40:00.738344 | orchestrator | Monday 23 February 2026 20:32:54 +0000 (0:00:00.303) 0:03:26.310 ******* 2026-02-23 20:40:00.738347 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738350 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738353 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738356 | orchestrator | 2026-02-23 20:40:00.738359 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-23 20:40:00.738363 | orchestrator | Monday 23 February 2026 20:32:54 +0000 (0:00:00.286) 0:03:26.597 ******* 2026-02-23 20:40:00.738366 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738369 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738372 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738375 | orchestrator | 2026-02-23 20:40:00.738378 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-23 20:40:00.738381 | orchestrator | Monday 23 February 2026 20:32:54 +0000 (0:00:00.273) 0:03:26.870 ******* 2026-02-23 20:40:00.738385 | orchestrator | ok: [testbed-node-0] 
2026-02-23 20:40:00.738388 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738391 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738394 | orchestrator | 2026-02-23 20:40:00.738397 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-23 20:40:00.738403 | orchestrator | Monday 23 February 2026 20:32:55 +0000 (0:00:00.839) 0:03:27.709 ******* 2026-02-23 20:40:00.738408 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738413 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738418 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738423 | orchestrator | 2026-02-23 20:40:00.738428 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-23 20:40:00.738433 | orchestrator | Monday 23 February 2026 20:32:56 +0000 (0:00:00.271) 0:03:27.981 ******* 2026-02-23 20:40:00.738456 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738463 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738469 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738474 | orchestrator | 2026-02-23 20:40:00.738479 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-23 20:40:00.738484 | orchestrator | Monday 23 February 2026 20:32:56 +0000 (0:00:00.267) 0:03:28.249 ******* 2026-02-23 20:40:00.738489 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738494 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738500 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738504 | orchestrator | 2026-02-23 20:40:00.738507 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-23 20:40:00.738510 | orchestrator | Monday 23 February 2026 20:32:56 +0000 (0:00:00.595) 0:03:28.845 ******* 2026-02-23 20:40:00.738513 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738516 | 
orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738519 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738522 | orchestrator | 2026-02-23 20:40:00.738526 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-23 20:40:00.738529 | orchestrator | Monday 23 February 2026 20:32:57 +0000 (0:00:00.775) 0:03:29.620 ******* 2026-02-23 20:40:00.738532 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738535 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738538 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738541 | orchestrator | 2026-02-23 20:40:00.738544 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-23 20:40:00.738547 | orchestrator | Monday 23 February 2026 20:32:57 +0000 (0:00:00.280) 0:03:29.901 ******* 2026-02-23 20:40:00.738550 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738553 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738556 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738559 | orchestrator | 2026-02-23 20:40:00.738564 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-23 20:40:00.738569 | orchestrator | Monday 23 February 2026 20:32:58 +0000 (0:00:00.285) 0:03:30.186 ******* 2026-02-23 20:40:00.738579 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738582 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738585 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738588 | orchestrator | 2026-02-23 20:40:00.738592 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-23 20:40:00.738595 | orchestrator | Monday 23 February 2026 20:32:58 +0000 (0:00:00.268) 0:03:30.454 ******* 2026-02-23 20:40:00.738598 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738601 | orchestrator | skipping: 
[testbed-node-1] 2026-02-23 20:40:00.738604 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738607 | orchestrator | 2026-02-23 20:40:00.738610 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-23 20:40:00.738613 | orchestrator | Monday 23 February 2026 20:32:58 +0000 (0:00:00.262) 0:03:30.717 ******* 2026-02-23 20:40:00.738616 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738619 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738623 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738626 | orchestrator | 2026-02-23 20:40:00.738629 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-23 20:40:00.738632 | orchestrator | Monday 23 February 2026 20:32:59 +0000 (0:00:00.453) 0:03:31.170 ******* 2026-02-23 20:40:00.738635 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738638 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738641 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738644 | orchestrator | 2026-02-23 20:40:00.738647 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-23 20:40:00.738653 | orchestrator | Monday 23 February 2026 20:32:59 +0000 (0:00:00.256) 0:03:31.427 ******* 2026-02-23 20:40:00.738656 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738659 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.738662 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.738665 | orchestrator | 2026-02-23 20:40:00.738668 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-23 20:40:00.738671 | orchestrator | Monday 23 February 2026 20:32:59 +0000 (0:00:00.278) 0:03:31.705 ******* 2026-02-23 20:40:00.738674 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738678 | orchestrator | ok: 
[testbed-node-1] 2026-02-23 20:40:00.738681 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738684 | orchestrator | 2026-02-23 20:40:00.738687 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-23 20:40:00.738690 | orchestrator | Monday 23 February 2026 20:33:00 +0000 (0:00:00.338) 0:03:32.044 ******* 2026-02-23 20:40:00.738693 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738696 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738699 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738702 | orchestrator | 2026-02-23 20:40:00.738705 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-23 20:40:00.738710 | orchestrator | Monday 23 February 2026 20:33:00 +0000 (0:00:00.660) 0:03:32.704 ******* 2026-02-23 20:40:00.738715 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738721 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738725 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738730 | orchestrator | 2026-02-23 20:40:00.738736 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-23 20:40:00.738741 | orchestrator | Monday 23 February 2026 20:33:01 +0000 (0:00:00.538) 0:03:33.242 ******* 2026-02-23 20:40:00.738746 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738751 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738757 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738762 | orchestrator | 2026-02-23 20:40:00.738767 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-23 20:40:00.738770 | orchestrator | Monday 23 February 2026 20:33:01 +0000 (0:00:00.331) 0:03:33.574 ******* 2026-02-23 20:40:00.738773 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:00.738779 | 
orchestrator | 2026-02-23 20:40:00.738782 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-23 20:40:00.738785 | orchestrator | Monday 23 February 2026 20:33:02 +0000 (0:00:00.726) 0:03:34.300 ******* 2026-02-23 20:40:00.738788 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.738791 | orchestrator | 2026-02-23 20:40:00.738808 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-23 20:40:00.738812 | orchestrator | Monday 23 February 2026 20:33:02 +0000 (0:00:00.126) 0:03:34.427 ******* 2026-02-23 20:40:00.738815 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-23 20:40:00.738818 | orchestrator | 2026-02-23 20:40:00.738821 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-23 20:40:00.738824 | orchestrator | Monday 23 February 2026 20:33:03 +0000 (0:00:00.957) 0:03:35.384 ******* 2026-02-23 20:40:00.738827 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738830 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738834 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738837 | orchestrator | 2026-02-23 20:40:00.738840 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-23 20:40:00.738843 | orchestrator | Monday 23 February 2026 20:33:03 +0000 (0:00:00.298) 0:03:35.683 ******* 2026-02-23 20:40:00.738846 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738849 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738853 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738858 | orchestrator | 2026-02-23 20:40:00.738864 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-23 20:40:00.738869 | orchestrator | Monday 23 February 2026 20:33:04 +0000 (0:00:00.283) 0:03:35.966 ******* 2026-02-23 20:40:00.738873 | orchestrator | changed: 
[testbed-node-0] 2026-02-23 20:40:00.738877 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.738880 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.738883 | orchestrator | 2026-02-23 20:40:00.738886 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-23 20:40:00.738890 | orchestrator | Monday 23 February 2026 20:33:05 +0000 (0:00:01.273) 0:03:37.240 ******* 2026-02-23 20:40:00.738895 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.738901 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.738906 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.738911 | orchestrator | 2026-02-23 20:40:00.738917 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-23 20:40:00.738922 | orchestrator | Monday 23 February 2026 20:33:06 +0000 (0:00:00.780) 0:03:38.020 ******* 2026-02-23 20:40:00.738928 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.738933 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.738938 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.738943 | orchestrator | 2026-02-23 20:40:00.738948 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-23 20:40:00.738951 | orchestrator | Monday 23 February 2026 20:33:06 +0000 (0:00:00.701) 0:03:38.722 ******* 2026-02-23 20:40:00.738954 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.738957 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.738960 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.738963 | orchestrator | 2026-02-23 20:40:00.738966 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-23 20:40:00.738970 | orchestrator | Monday 23 February 2026 20:33:07 +0000 (0:00:00.688) 0:03:39.411 ******* 2026-02-23 20:40:00.738975 | orchestrator | changed: [testbed-node-0] 2026-02-23 
20:40:00.738980 | orchestrator | 2026-02-23 20:40:00.738985 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-23 20:40:00.738991 | orchestrator | Monday 23 February 2026 20:33:09 +0000 (0:00:01.704) 0:03:41.115 ******* 2026-02-23 20:40:00.738996 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.739001 | orchestrator | 2026-02-23 20:40:00.739007 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-23 20:40:00.739016 | orchestrator | Monday 23 February 2026 20:33:10 +0000 (0:00:01.065) 0:03:42.181 ******* 2026-02-23 20:40:00.739025 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.739029 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-23 20:40:00.739036 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.739039 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-23 20:40:00.739044 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-23 20:40:00.739049 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-23 20:40:00.739054 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-23 20:40:00.739059 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-23 20:40:00.739064 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-23 20:40:00.739070 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-23 20:40:00.739075 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-23 20:40:00.739080 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-02-23 20:40:00.739084 | orchestrator | 2026-02-23 20:40:00.739087 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-23 
20:40:00.739091 | orchestrator | Monday 23 February 2026 20:33:13 +0000 (0:00:03.454) 0:03:45.636 ******* 2026-02-23 20:40:00.739097 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.739101 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.739104 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.739107 | orchestrator | 2026-02-23 20:40:00.739110 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-23 20:40:00.739113 | orchestrator | Monday 23 February 2026 20:33:15 +0000 (0:00:01.412) 0:03:47.048 ******* 2026-02-23 20:40:00.739119 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.739124 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.739129 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.739134 | orchestrator | 2026-02-23 20:40:00.739146 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-23 20:40:00.739150 | orchestrator | Monday 23 February 2026 20:33:15 +0000 (0:00:00.266) 0:03:47.315 ******* 2026-02-23 20:40:00.739153 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.739156 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.739159 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.739162 | orchestrator | 2026-02-23 20:40:00.739166 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-23 20:40:00.739171 | orchestrator | Monday 23 February 2026 20:33:15 +0000 (0:00:00.430) 0:03:47.745 ******* 2026-02-23 20:40:00.739191 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.739195 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.739198 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.739201 | orchestrator | 2026-02-23 20:40:00.739204 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-23 20:40:00.739207 | orchestrator | Monday 23 
February 2026 20:33:17 +0000 (0:00:01.399) 0:03:49.145 ******* 2026-02-23 20:40:00.739211 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.739214 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.739217 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.739220 | orchestrator | 2026-02-23 20:40:00.739223 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-23 20:40:00.739226 | orchestrator | Monday 23 February 2026 20:33:18 +0000 (0:00:01.364) 0:03:50.509 ******* 2026-02-23 20:40:00.739229 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.739234 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.739239 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.739243 | orchestrator | 2026-02-23 20:40:00.739246 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-23 20:40:00.739254 | orchestrator | Monday 23 February 2026 20:33:18 +0000 (0:00:00.290) 0:03:50.799 ******* 2026-02-23 20:40:00.739257 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:00.739261 | orchestrator | 2026-02-23 20:40:00.739267 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-23 20:40:00.739272 | orchestrator | Monday 23 February 2026 20:33:19 +0000 (0:00:00.679) 0:03:51.478 ******* 2026-02-23 20:40:00.739278 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.739281 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:00.739285 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:00.739290 | orchestrator | 2026-02-23 20:40:00.739295 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-23 20:40:00.739300 | orchestrator | Monday 23 February 2026 20:33:19 +0000 (0:00:00.275) 0:03:51.754 ******* 
2026-02-23 20:40:00.739305 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739308 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739311 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739314 | orchestrator |
2026-02-23 20:40:00.739321 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-23 20:40:00.739324 | orchestrator | Monday 23 February 2026 20:33:20 +0000 (0:00:00.378) 0:03:52.132 *******
2026-02-23 20:40:00.739327 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.739330 | orchestrator |
2026-02-23 20:40:00.739334 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-23 20:40:00.739338 | orchestrator | Monday 23 February 2026 20:33:21 +0000 (0:00:00.871) 0:03:53.003 *******
2026-02-23 20:40:00.739344 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.739348 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.739351 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.739354 | orchestrator |
2026-02-23 20:40:00.739359 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-23 20:40:00.739365 | orchestrator | Monday 23 February 2026 20:33:22 +0000 (0:00:01.495) 0:03:54.498 *******
2026-02-23 20:40:00.739368 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.739371 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.739374 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.739377 | orchestrator |
2026-02-23 20:40:00.739380 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-23 20:40:00.739386 | orchestrator | Monday 23 February 2026 20:33:23 +0000 (0:00:01.840) 0:03:55.839 *******
2026-02-23 20:40:00.739389 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.739392 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.739395 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.739398 | orchestrator |
2026-02-23 20:40:00.739401 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-23 20:40:00.739405 | orchestrator | Monday 23 February 2026 20:33:25 +0000 (0:00:01.840) 0:03:57.680 *******
2026-02-23 20:40:00.739408 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.739411 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.739414 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.739417 | orchestrator |
2026-02-23 20:40:00.739420 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-23 20:40:00.739423 | orchestrator | Monday 23 February 2026 20:33:28 +0000 (0:00:02.318) 0:03:59.998 *******
2026-02-23 20:40:00.739426 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.739429 | orchestrator |
2026-02-23 20:40:00.739432 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-23 20:40:00.739435 | orchestrator | Monday 23 February 2026 20:33:28 +0000 (0:00:00.497) 0:04:00.495 *******
2026-02-23 20:40:00.739439 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-23 20:40:00.739445 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.739451 | orchestrator |
2026-02-23 20:40:00.739456 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-23 20:40:00.739461 | orchestrator | Monday 23 February 2026 20:33:50 +0000 (0:00:21.691) 0:04:22.187 *******
2026-02-23 20:40:00.739466 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.739472 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.739477 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.739482 | orchestrator |
2026-02-23 20:40:00.739487 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-23 20:40:00.739492 | orchestrator | Monday 23 February 2026 20:34:00 +0000 (0:00:10.199) 0:04:32.386 *******
2026-02-23 20:40:00.739497 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739501 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739504 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739507 | orchestrator |
2026-02-23 20:40:00.739511 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-23 20:40:00.739527 | orchestrator | Monday 23 February 2026 20:34:00 +0000 (0:00:00.526) 0:04:32.913 *******
2026-02-23 20:40:00.739532 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__47a603324f3418a01c97c8c43bd57b398669b867'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-23 20:40:00.739537 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__47a603324f3418a01c97c8c43bd57b398669b867'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-23 20:40:00.739541 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__47a603324f3418a01c97c8c43bd57b398669b867'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-23 20:40:00.739545 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__47a603324f3418a01c97c8c43bd57b398669b867'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-23 20:40:00.739548 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__47a603324f3418a01c97c8c43bd57b398669b867'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-23 20:40:00.739552 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__47a603324f3418a01c97c8c43bd57b398669b867'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__47a603324f3418a01c97c8c43bd57b398669b867'}])
2026-02-23 20:40:00.739556 | orchestrator |
2026-02-23 20:40:00.739562 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-23 20:40:00.739565 | orchestrator | Monday 23 February 2026 20:34:15 +0000 (0:00:14.861) 0:04:47.774 *******
2026-02-23 20:40:00.739568 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739574 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739577 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739580 | orchestrator |
2026-02-23 20:40:00.739583 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-23 20:40:00.739586 | orchestrator | Monday 23 February 2026 20:34:16 +0000 (0:00:00.305) 0:04:48.080 *******
2026-02-23 20:40:00.739589 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.739592 | orchestrator |
2026-02-23 20:40:00.739596 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-23 20:40:00.739599 | orchestrator | Monday 23 February 2026 20:34:16 +0000 (0:00:00.673) 0:04:48.753 *******
2026-02-23 20:40:00.739602 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.739605 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.739608 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.739611 | orchestrator |
2026-02-23 20:40:00.739614 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-23 20:40:00.739617 | orchestrator | Monday 23 February 2026 20:34:17 +0000 (0:00:00.405) 0:04:49.159 *******
2026-02-23 20:40:00.739620 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739623 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739626 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739630 | orchestrator |
2026-02-23 20:40:00.739633 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-23 20:40:00.739636 | orchestrator | Monday 23 February 2026 20:34:17 +0000 (0:00:00.319) 0:04:49.478 *******
2026-02-23 20:40:00.739639 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-23 20:40:00.739642 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-23 20:40:00.739645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-23 20:40:00.739648 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739651 | orchestrator |
2026-02-23 20:40:00.739655 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-23 20:40:00.739658 | orchestrator | Monday 23 February 2026 20:34:18 +0000 (0:00:00.730) 0:04:50.208 *******
2026-02-23 20:40:00.739661 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.739673 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.739676 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.739679 | orchestrator |
2026-02-23 20:40:00.739682 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-23 20:40:00.739685 | orchestrator |
2026-02-23 20:40:00.739689 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-23 20:40:00.739692 | orchestrator | Monday 23 February 2026 20:34:19 +0000 (0:00:00.835) 0:04:51.043 *******
2026-02-23 20:40:00.739695 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.739698 | orchestrator |
2026-02-23 20:40:00.739702 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-23 20:40:00.739705 | orchestrator | Monday 23 February 2026 20:34:19 +0000 (0:00:00.638) 0:04:51.681 *******
2026-02-23 20:40:00.739708 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.739711 | orchestrator |
2026-02-23 20:40:00.739714 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-23 20:40:00.739717 | orchestrator | Monday 23 February 2026 20:34:20 +0000 (0:00:00.841) 0:04:52.523 *******
2026-02-23 20:40:00.739720 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.739723 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.739727 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.739730 | orchestrator |
2026-02-23 20:40:00.739733 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-23 20:40:00.739736 | orchestrator | Monday 23 February 2026 20:34:21 +0000 (0:00:00.755) 0:04:53.279 *******
2026-02-23 20:40:00.739741 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739745 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739748 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739751 | orchestrator |
2026-02-23 20:40:00.739754 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-23 20:40:00.739757 | orchestrator | Monday 23 February 2026 20:34:21 +0000 (0:00:00.330) 0:04:53.610 *******
2026-02-23 20:40:00.739760 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739763 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739766 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739770 | orchestrator |
2026-02-23 20:40:00.739773 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-23 20:40:00.739776 | orchestrator | Monday 23 February 2026 20:34:22 +0000 (0:00:00.434) 0:04:54.044 *******
2026-02-23 20:40:00.739779 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739782 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739785 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739788 | orchestrator |
2026-02-23 20:40:00.739791 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-23 20:40:00.739794 | orchestrator | Monday 23 February 2026 20:34:22 +0000 (0:00:00.267) 0:04:54.311 *******
2026-02-23 20:40:00.739797 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.739801 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.739804 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.739807 | orchestrator |
2026-02-23 20:40:00.739810 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-23 20:40:00.739813 | orchestrator | Monday 23 February 2026 20:34:23 +0000 (0:00:00.683) 0:04:54.995 *******
2026-02-23 20:40:00.739816 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739819 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739822 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739825 | orchestrator |
2026-02-23 20:40:00.739830 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-23 20:40:00.739833 | orchestrator | Monday 23 February 2026 20:34:23 +0000 (0:00:00.283) 0:04:55.279 *******
2026-02-23 20:40:00.739836 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739839 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739842 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739845 | orchestrator |
2026-02-23 20:40:00.739849 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-23 20:40:00.739852 | orchestrator | Monday 23 February 2026 20:34:23 +0000 (0:00:00.279) 0:04:55.558 *******
2026-02-23 20:40:00.739855 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.739858 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.739861 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.739864 | orchestrator |
2026-02-23 20:40:00.739867 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-23 20:40:00.739870 | orchestrator | Monday 23 February 2026 20:34:24 +0000 (0:00:00.858) 0:04:56.417 *******
2026-02-23 20:40:00.739873 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.739877 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.739880 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.739883 | orchestrator |
2026-02-23 20:40:00.739886 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-23 20:40:00.739889 | orchestrator | Monday 23 February 2026 20:34:25 +0000 (0:00:00.631) 0:04:57.048 *******
2026-02-23 20:40:00.739892 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739895 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739898 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739901 | orchestrator |
2026-02-23 20:40:00.739904 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-23 20:40:00.739908 | orchestrator | Monday 23 February 2026 20:34:25 +0000 (0:00:00.273) 0:04:57.322 *******
2026-02-23 20:40:00.739911 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.739914 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.739919 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.739922 | orchestrator |
2026-02-23 20:40:00.739925 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-23 20:40:00.739928 | orchestrator | Monday 23 February 2026 20:34:25 +0000 (0:00:00.333) 0:04:57.655 *******
2026-02-23 20:40:00.739931 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739934 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739938 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739941 | orchestrator |
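Tasks such as "Waiting for the monitor(s) to form the quorum..." and the later "Wait for all mgr to be up" use Ansible's `until`/`retries`/`delay` pattern: a status command (the real tasks shell out to `ceph` inside the mon container) is polled until it succeeds or the retries run out, printing a "FAILED - RETRYING" line per attempt as seen above. A minimal sketch of that loop, assuming a caller-supplied `check` callable (the names `wait_until`, `check`, and the fake probe are illustrative, not ceph-ansible's code):

```python
import time


def wait_until(check, retries=10, delay=2.0, sleep=time.sleep):
    """Poll `check` until it returns a truthy value, mimicking Ansible's
    until/retries/delay loop. Raises TimeoutError when retries run out."""
    for _ in range(retries):
        result = check()
        if result:
            return result
        # Ansible would print "FAILED - RETRYING: ... (N retries left)." here.
        sleep(delay)
    raise TimeoutError(f"condition not met after {retries} attempts")


# Hypothetical probe that succeeds on the second call, like the run above
# where quorum formed after a single retry.
attempts = iter([False, True])
print(wait_until(lambda: next(attempts), retries=10, delay=0, sleep=lambda _: None))
```

In the log, the quorum wait consumed one retry (about 22 seconds) and the mgr wait consumed five before `ceph mgr dump` reported all managers up.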
2026-02-23 20:40:00.739944 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-23 20:40:00.739956 | orchestrator | Monday 23 February 2026 20:34:26 +0000 (0:00:00.438) 0:04:58.094 *******
2026-02-23 20:40:00.739960 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739963 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739966 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739969 | orchestrator |
2026-02-23 20:40:00.739972 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-23 20:40:00.739975 | orchestrator | Monday 23 February 2026 20:34:26 +0000 (0:00:00.284) 0:04:58.378 *******
2026-02-23 20:40:00.739978 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.739981 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.739984 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.739987 | orchestrator |
2026-02-23 20:40:00.739990 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-23 20:40:00.739994 | orchestrator | Monday 23 February 2026 20:34:26 +0000 (0:00:00.278) 0:04:58.657 *******
2026-02-23 20:40:00.739997 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.740000 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.740003 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.740006 | orchestrator |
2026-02-23 20:40:00.740009 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-23 20:40:00.740012 | orchestrator | Monday 23 February 2026 20:34:27 +0000 (0:00:00.284) 0:04:58.941 *******
2026-02-23 20:40:00.740015 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.740018 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.740021 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.740025 | orchestrator |
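Further down in this play, `mgr_modules.yml` reconciles the modules reported by `ceph mgr module ls` against the desired `ceph_mgr_modules` list: in this run it disables the extras (iostat, nfs, restful) and enables the missing ones (dashboard, prometheus), while always-on modules (balancer, status) are skipped. A hedged sketch of that reconciliation, assuming a simplified JSON shape with top-level `enabled_modules` and `always_on_modules` lists (the real `ceph mgr module ls` output is richer and varies by release):

```python
import json


def plan_mgr_modules(module_ls_json, wanted):
    """Return (to_disable, to_enable) given `ceph mgr module ls`-style JSON.
    Always-on modules are never disabled and need no explicit enable."""
    state = json.loads(module_ls_json)
    enabled = set(state["enabled_modules"])
    always_on = set(state.get("always_on_modules", []))
    to_disable = sorted(enabled - set(wanted))
    to_enable = sorted(set(wanted) - enabled - always_on)
    return to_disable, to_enable


# Mirrors this run: iostat/nfs/restful are disabled, dashboard/prometheus
# enabled, balancer/status left alone as always-on.
state = json.dumps({
    "always_on_modules": ["balancer", "status"],
    "enabled_modules": ["iostat", "nfs", "restful"],
})
print(plan_mgr_modules(state, ["balancer", "dashboard", "prometheus", "status"]))
# → (['iostat', 'nfs', 'restful'], ['dashboard', 'prometheus'])
```

Each entry in the returned lists corresponds to one `ceph mgr module disable <name>` or `ceph mgr module enable <name>` invocation, matching the per-item `changed`/`skipping` results in the log.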
2026-02-23 20:40:00.740028 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-23 20:40:00.740031 | orchestrator | Monday 23 February 2026 20:34:27 +0000 (0:00:00.442) 0:04:59.383 *******
2026-02-23 20:40:00.740034 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.740037 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.740040 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.740043 | orchestrator |
2026-02-23 20:40:00.740046 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-23 20:40:00.740050 | orchestrator | Monday 23 February 2026 20:34:27 +0000 (0:00:00.304) 0:04:59.688 *******
2026-02-23 20:40:00.740053 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.740056 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.740059 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.740062 | orchestrator |
2026-02-23 20:40:00.740065 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-23 20:40:00.740068 | orchestrator | Monday 23 February 2026 20:34:28 +0000 (0:00:00.297) 0:04:59.986 *******
2026-02-23 20:40:00.740071 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.740074 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.740077 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.740080 | orchestrator |
2026-02-23 20:40:00.740083 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-23 20:40:00.740087 | orchestrator | Monday 23 February 2026 20:34:28 +0000 (0:00:00.629) 0:05:00.615 *******
2026-02-23 20:40:00.740090 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-23 20:40:00.740093 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-23 20:40:00.740098 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-23 20:40:00.740101 | orchestrator |
2026-02-23 20:40:00.740104 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-23 20:40:00.740108 | orchestrator | Monday 23 February 2026 20:34:29 +0000 (0:00:00.553) 0:05:01.168 *******
2026-02-23 20:40:00.740112 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.740116 | orchestrator |
2026-02-23 20:40:00.740119 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-23 20:40:00.740122 | orchestrator | Monday 23 February 2026 20:34:29 +0000 (0:00:00.473) 0:05:01.642 *******
2026-02-23 20:40:00.740125 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.740128 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.740131 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.740134 | orchestrator |
2026-02-23 20:40:00.740149 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-23 20:40:00.740153 | orchestrator | Monday 23 February 2026 20:34:30 +0000 (0:00:00.652) 0:05:02.295 *******
2026-02-23 20:40:00.740156 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.740159 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.740162 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.740165 | orchestrator |
2026-02-23 20:40:00.740168 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-23 20:40:00.740171 | orchestrator | Monday 23 February 2026 20:34:30 +0000 (0:00:00.446) 0:05:02.741 *******
2026-02-23 20:40:00.740174 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-23 20:40:00.740177 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-23 20:40:00.740180 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-23 20:40:00.740184 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-23 20:40:00.740187 | orchestrator |
2026-02-23 20:40:00.740190 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-23 20:40:00.740193 | orchestrator | Monday 23 February 2026 20:34:40 +0000 (0:00:09.611) 0:05:12.352 *******
2026-02-23 20:40:00.740196 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.740199 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.740202 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.740205 | orchestrator |
2026-02-23 20:40:00.740208 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-23 20:40:00.740211 | orchestrator | Monday 23 February 2026 20:34:40 +0000 (0:00:00.404) 0:05:12.756 *******
2026-02-23 20:40:00.740215 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-23 20:40:00.740218 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-23 20:40:00.740221 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-23 20:40:00.740224 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-23 20:40:00.740227 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-23 20:40:00.740243 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-23 20:40:00.740248 | orchestrator |
2026-02-23 20:40:00.740254 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-23 20:40:00.740258 | orchestrator | Monday 23 February 2026 20:34:42 +0000 (0:00:02.071) 0:05:14.828 *******
2026-02-23 20:40:00.740263 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-23 20:40:00.740267 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-23 20:40:00.740272 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-23 20:40:00.740277 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-23 20:40:00.740282 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-23 20:40:00.740287 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-23 20:40:00.740291 | orchestrator |
2026-02-23 20:40:00.740296 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-23 20:40:00.740304 | orchestrator | Monday 23 February 2026 20:34:43 +0000 (0:00:01.062) 0:05:15.891 *******
2026-02-23 20:40:00.740310 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:00.740314 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:00.740320 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:00.740324 | orchestrator |
2026-02-23 20:40:00.740330 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-23 20:40:00.740335 | orchestrator | Monday 23 February 2026 20:34:44 +0000 (0:00:00.851) 0:05:16.742 *******
2026-02-23 20:40:00.740341 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.740345 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.740348 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.740351 | orchestrator |
2026-02-23 20:40:00.740354 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-23 20:40:00.740357 | orchestrator | Monday 23 February 2026 20:34:45 +0000 (0:00:00.285) 0:05:17.028 *******
2026-02-23 20:40:00.740361 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.740364 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.740367 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.740370 | orchestrator |
2026-02-23 20:40:00.740373 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-23 20:40:00.740376 | orchestrator | Monday 23 February 2026 20:34:45 +0000 (0:00:00.288) 0:05:17.316 *******
2026-02-23 20:40:00.740379 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.740382 | orchestrator |
2026-02-23 20:40:00.740385 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-23 20:40:00.740389 | orchestrator | Monday 23 February 2026 20:34:46 +0000 (0:00:00.642) 0:05:17.959 *******
2026-02-23 20:40:00.740392 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.740395 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.740398 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.740401 | orchestrator |
2026-02-23 20:40:00.740404 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-23 20:40:00.740407 | orchestrator | Monday 23 February 2026 20:34:46 +0000 (0:00:00.317) 0:05:18.276 *******
2026-02-23 20:40:00.740410 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.740413 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.740416 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:00.740419 | orchestrator |
2026-02-23 20:40:00.740423 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-23 20:40:00.740426 | orchestrator | Monday 23 February 2026 20:34:46 +0000 (0:00:00.323) 0:05:18.599 *******
2026-02-23 20:40:00.740431 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:40:00.740434 | orchestrator |
2026-02-23 20:40:00.740437 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-23 20:40:00.740440 | orchestrator | Monday 23 February 2026 20:34:47 +0000 (0:00:00.493) 0:05:19.093 *******
2026-02-23 20:40:00.740444 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.740449 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.740454 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.740459 | orchestrator |
2026-02-23 20:40:00.740463 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-23 20:40:00.740468 | orchestrator | Monday 23 February 2026 20:34:48 +0000 (0:00:01.287) 0:05:20.381 *******
2026-02-23 20:40:00.740475 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.740482 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.740487 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.740492 | orchestrator |
2026-02-23 20:40:00.740497 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-23 20:40:00.740501 | orchestrator | Monday 23 February 2026 20:34:49 +0000 (0:00:01.044) 0:05:21.426 *******
2026-02-23 20:40:00.740506 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.740514 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.740519 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.740523 | orchestrator |
2026-02-23 20:40:00.740528 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-23 20:40:00.740532 | orchestrator | Monday 23 February 2026 20:34:51 +0000 (0:00:01.674) 0:05:23.100 *******
2026-02-23 20:40:00.740537 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:40:00.740541 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:40:00.740545 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:40:00.740550 | orchestrator |
2026-02-23 20:40:00.740555 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-23 20:40:00.740560 | orchestrator | Monday 23 February 2026 20:34:53 +0000 (0:00:01.934) 0:05:25.035 *******
2026-02-23 20:40:00.740565 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:00.740570 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:00.740575 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-23 20:40:00.740580 | orchestrator |
2026-02-23 20:40:00.740585 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-23 20:40:00.740590 | orchestrator | Monday 23 February 2026 20:34:53 +0000 (0:00:00.520) 0:05:25.555 *******
2026-02-23 20:40:00.740610 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-23 20:40:00.740614 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-23 20:40:00.740617 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-23 20:40:00.740620 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-23 20:40:00.740624 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-02-23 20:40:00.740627 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-23 20:40:00.740630 | orchestrator | 2026-02-23 20:40:00.740633 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-23 20:40:00.740636 | orchestrator | Monday 23 February 2026 20:35:23 +0000 (0:00:29.843) 0:05:55.399 ******* 2026-02-23 20:40:00.740639 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-23 20:40:00.740642 | orchestrator | 2026-02-23 20:40:00.740645 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-23 20:40:00.740648 | orchestrator | Monday 23 February 2026 20:35:24 +0000 (0:00:01.329) 0:05:56.729 ******* 2026-02-23 20:40:00.740652 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.740655 | orchestrator | 2026-02-23 20:40:00.740658 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-23 20:40:00.740661 | orchestrator | Monday 23 February 2026 20:35:25 +0000 (0:00:00.303) 0:05:57.032 ******* 2026-02-23 20:40:00.740664 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.740667 | orchestrator | 2026-02-23 20:40:00.740670 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-23 20:40:00.740673 | orchestrator | Monday 23 February 2026 20:35:25 +0000 (0:00:00.140) 0:05:57.172 ******* 2026-02-23 20:40:00.740677 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-23 20:40:00.740680 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-23 20:40:00.740694 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-23 20:40:00.740697 | orchestrator | 2026-02-23 20:40:00.740701 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-23 20:40:00.740704 | orchestrator | Monday 23 February 2026 20:35:31 +0000 (0:00:06.499) 0:06:03.672 ******* 2026-02-23 20:40:00.740707 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-23 20:40:00.740710 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-23 20:40:00.740716 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-23 20:40:00.740719 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-23 20:40:00.740722 | orchestrator | 2026-02-23 20:40:00.740726 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-23 20:40:00.740729 | orchestrator | Monday 23 February 2026 20:35:36 +0000 (0:00:05.181) 0:06:08.854 ******* 2026-02-23 20:40:00.740732 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.740735 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.740738 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.740741 | orchestrator | 2026-02-23 20:40:00.740753 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-23 20:40:00.740757 | orchestrator | Monday 23 February 2026 20:35:37 +0000 (0:00:00.644) 0:06:09.498 ******* 2026-02-23 20:40:00.740760 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:00.740763 | orchestrator | 2026-02-23 20:40:00.740766 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-23 20:40:00.740769 | orchestrator | Monday 23 February 2026 20:35:38 +0000 (0:00:00.436) 0:06:09.935 ******* 2026-02-23 20:40:00.740772 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.740775 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.740778 | orchestrator | ok: 
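The module tasks above first read the enabled modules, then disable the ones not wanted (iostat, nfs, restful) and enable the wanted ones (dashboard, prometheus), skipping always-on modules such as balancer and status. A sketch of that set arithmetic, assuming the `enabled_modules` key from `ceph mgr module ls -f json`; the function and the always-on list are illustrative assumptions, not the role's implementation:

```python
import json

def plan_mgr_modules(module_ls_json: str, wanted: set,
                     always_on=frozenset({"balancer", "status"})):
    """Split currently-enabled vs. wanted modules into disable/enable lists."""
    enabled = set(json.loads(module_ls_json)["enabled_modules"])
    to_disable = sorted(enabled - wanted)          # enabled but unwanted
    to_enable = sorted(wanted - enabled - always_on)  # wanted, not yet on, not always-on
    return to_disable, to_enable

# Example matching the log: disables iostat/nfs/restful, enables dashboard/prometheus.
current = json.dumps({"enabled_modules": ["iostat", "nfs", "restful"]})
print(plan_mgr_modules(current, {"balancer", "dashboard", "prometheus", "status"}))
```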
[testbed-node-2] 2026-02-23 20:40:00.740782 | orchestrator | 2026-02-23 20:40:00.740785 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-23 20:40:00.740788 | orchestrator | Monday 23 February 2026 20:35:38 +0000 (0:00:00.433) 0:06:10.369 ******* 2026-02-23 20:40:00.740791 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:00.740794 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:00.740797 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:00.740800 | orchestrator | 2026-02-23 20:40:00.740803 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-23 20:40:00.740807 | orchestrator | Monday 23 February 2026 20:35:39 +0000 (0:00:01.069) 0:06:11.438 ******* 2026-02-23 20:40:00.740810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-23 20:40:00.740813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-23 20:40:00.740816 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-23 20:40:00.740819 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:00.740822 | orchestrator | 2026-02-23 20:40:00.740825 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-23 20:40:00.740828 | orchestrator | Monday 23 February 2026 20:35:40 +0000 (0:00:00.557) 0:06:11.996 ******* 2026-02-23 20:40:00.740832 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:00.740835 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:00.740838 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:00.740841 | orchestrator | 2026-02-23 20:40:00.740844 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-02-23 20:40:00.740847 | orchestrator | 2026-02-23 20:40:00.740850 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-23 
20:40:00.740854 | orchestrator | Monday 23 February 2026 20:35:40 +0000 (0:00:00.639) 0:06:12.635 ******* 2026-02-23 20:40:00.740866 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.740870 | orchestrator | 2026-02-23 20:40:00.740873 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-23 20:40:00.740876 | orchestrator | Monday 23 February 2026 20:35:41 +0000 (0:00:00.453) 0:06:13.089 ******* 2026-02-23 20:40:00.740879 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.740882 | orchestrator | 2026-02-23 20:40:00.740886 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-23 20:40:00.740892 | orchestrator | Monday 23 February 2026 20:35:41 +0000 (0:00:00.630) 0:06:13.720 ******* 2026-02-23 20:40:00.740895 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.740898 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.740902 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.740905 | orchestrator | 2026-02-23 20:40:00.740908 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-23 20:40:00.740911 | orchestrator | Monday 23 February 2026 20:35:42 +0000 (0:00:00.262) 0:06:13.982 ******* 2026-02-23 20:40:00.740914 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.740917 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.740920 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.740923 | orchestrator | 2026-02-23 20:40:00.740926 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-23 20:40:00.740929 | orchestrator | Monday 23 February 2026 20:35:42 +0000 (0:00:00.623) 0:06:14.605 ******* 
2026-02-23 20:40:00.740933 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.740936 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.740939 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.740942 | orchestrator | 2026-02-23 20:40:00.740945 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-23 20:40:00.740948 | orchestrator | Monday 23 February 2026 20:35:43 +0000 (0:00:00.711) 0:06:15.317 ******* 2026-02-23 20:40:00.740951 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.740954 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.740957 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.740960 | orchestrator | 2026-02-23 20:40:00.740963 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-23 20:40:00.740967 | orchestrator | Monday 23 February 2026 20:35:44 +0000 (0:00:00.699) 0:06:16.017 ******* 2026-02-23 20:40:00.740970 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.740973 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.740976 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.740979 | orchestrator | 2026-02-23 20:40:00.740982 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-23 20:40:00.740985 | orchestrator | Monday 23 February 2026 20:35:44 +0000 (0:00:00.438) 0:06:16.455 ******* 2026-02-23 20:40:00.740988 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.740991 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.740994 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.740997 | orchestrator | 2026-02-23 20:40:00.741000 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-23 20:40:00.741004 | orchestrator | Monday 23 February 2026 20:35:44 +0000 (0:00:00.255) 0:06:16.710 ******* 2026-02-23 20:40:00.741007 | 
orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.741010 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.741013 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.741016 | orchestrator | 2026-02-23 20:40:00.741019 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-23 20:40:00.741024 | orchestrator | Monday 23 February 2026 20:35:45 +0000 (0:00:00.259) 0:06:16.970 ******* 2026-02-23 20:40:00.741027 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741030 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741033 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741036 | orchestrator | 2026-02-23 20:40:00.741039 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-23 20:40:00.741043 | orchestrator | Monday 23 February 2026 20:35:45 +0000 (0:00:00.804) 0:06:17.774 ******* 2026-02-23 20:40:00.741046 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741049 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741053 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741058 | orchestrator | 2026-02-23 20:40:00.741063 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-23 20:40:00.741068 | orchestrator | Monday 23 February 2026 20:35:46 +0000 (0:00:00.945) 0:06:18.719 ******* 2026-02-23 20:40:00.741078 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.741082 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.741085 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.741088 | orchestrator | 2026-02-23 20:40:00.741091 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-23 20:40:00.741094 | orchestrator | Monday 23 February 2026 20:35:47 +0000 (0:00:00.284) 0:06:19.003 ******* 2026-02-23 20:40:00.741097 | orchestrator | skipping: 
[testbed-node-3] 2026-02-23 20:40:00.741100 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.741103 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.741106 | orchestrator | 2026-02-23 20:40:00.741110 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-23 20:40:00.741113 | orchestrator | Monday 23 February 2026 20:35:47 +0000 (0:00:00.249) 0:06:19.253 ******* 2026-02-23 20:40:00.741116 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741119 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741122 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741125 | orchestrator | 2026-02-23 20:40:00.741128 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-23 20:40:00.741131 | orchestrator | Monday 23 February 2026 20:35:47 +0000 (0:00:00.290) 0:06:19.543 ******* 2026-02-23 20:40:00.741134 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741160 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741165 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741168 | orchestrator | 2026-02-23 20:40:00.741171 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-23 20:40:00.741174 | orchestrator | Monday 23 February 2026 20:35:48 +0000 (0:00:00.450) 0:06:19.994 ******* 2026-02-23 20:40:00.741177 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741180 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741196 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741200 | orchestrator | 2026-02-23 20:40:00.741203 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-23 20:40:00.741206 | orchestrator | Monday 23 February 2026 20:35:48 +0000 (0:00:00.363) 0:06:20.357 ******* 2026-02-23 20:40:00.741210 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.741213 | 
orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.741216 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.741219 | orchestrator | 2026-02-23 20:40:00.741222 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-23 20:40:00.741225 | orchestrator | Monday 23 February 2026 20:35:48 +0000 (0:00:00.262) 0:06:20.620 ******* 2026-02-23 20:40:00.741228 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.741231 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.741234 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.741237 | orchestrator | 2026-02-23 20:40:00.741240 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-23 20:40:00.741243 | orchestrator | Monday 23 February 2026 20:35:48 +0000 (0:00:00.250) 0:06:20.871 ******* 2026-02-23 20:40:00.741247 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.741250 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.741253 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.741256 | orchestrator | 2026-02-23 20:40:00.741259 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-23 20:40:00.741262 | orchestrator | Monday 23 February 2026 20:35:49 +0000 (0:00:00.457) 0:06:21.328 ******* 2026-02-23 20:40:00.741265 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741268 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741271 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741274 | orchestrator | 2026-02-23 20:40:00.741277 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-23 20:40:00.741280 | orchestrator | Monday 23 February 2026 20:35:49 +0000 (0:00:00.374) 0:06:21.703 ******* 2026-02-23 20:40:00.741284 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741287 | orchestrator | ok: 
[testbed-node-4] 2026-02-23 20:40:00.741293 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741296 | orchestrator | 2026-02-23 20:40:00.741299 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-23 20:40:00.741302 | orchestrator | Monday 23 February 2026 20:35:50 +0000 (0:00:00.582) 0:06:22.285 ******* 2026-02-23 20:40:00.741305 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741308 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741311 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741314 | orchestrator | 2026-02-23 20:40:00.741317 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-23 20:40:00.741321 | orchestrator | Monday 23 February 2026 20:35:50 +0000 (0:00:00.504) 0:06:22.790 ******* 2026-02-23 20:40:00.741324 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-23 20:40:00.741327 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-23 20:40:00.741330 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-23 20:40:00.741333 | orchestrator | 2026-02-23 20:40:00.741336 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-23 20:40:00.741339 | orchestrator | Monday 23 February 2026 20:35:51 +0000 (0:00:00.637) 0:06:23.428 ******* 2026-02-23 20:40:00.741342 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.741345 | orchestrator | 2026-02-23 20:40:00.741351 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-23 20:40:00.741354 | orchestrator | Monday 23 February 2026 20:35:51 +0000 (0:00:00.437) 0:06:23.865 ******* 2026-02-23 20:40:00.741357 | orchestrator | skipping: 
[testbed-node-3] 2026-02-23 20:40:00.741360 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.741363 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.741366 | orchestrator | 2026-02-23 20:40:00.741369 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-23 20:40:00.741372 | orchestrator | Monday 23 February 2026 20:35:52 +0000 (0:00:00.401) 0:06:24.267 ******* 2026-02-23 20:40:00.741375 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.741378 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.741381 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.741385 | orchestrator | 2026-02-23 20:40:00.741388 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-23 20:40:00.741391 | orchestrator | Monday 23 February 2026 20:35:52 +0000 (0:00:00.263) 0:06:24.531 ******* 2026-02-23 20:40:00.741394 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741397 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741400 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741403 | orchestrator | 2026-02-23 20:40:00.741406 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-23 20:40:00.741409 | orchestrator | Monday 23 February 2026 20:35:53 +0000 (0:00:00.592) 0:06:25.123 ******* 2026-02-23 20:40:00.741412 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741416 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741419 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741422 | orchestrator | 2026-02-23 20:40:00.741425 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-23 20:40:00.741428 | orchestrator | Monday 23 February 2026 20:35:53 +0000 (0:00:00.359) 0:06:25.483 ******* 2026-02-23 20:40:00.741431 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-23 20:40:00.741434 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-23 20:40:00.741437 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-23 20:40:00.741441 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-23 20:40:00.741446 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-23 20:40:00.741451 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-23 20:40:00.741454 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-23 20:40:00.741458 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-23 20:40:00.741461 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-23 20:40:00.741464 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-23 20:40:00.741467 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-23 20:40:00.741470 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-23 20:40:00.741473 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-23 20:40:00.741476 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-23 20:40:00.741479 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-23 20:40:00.741482 | orchestrator | 2026-02-23 20:40:00.741486 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
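The "Apply operating system tuning" task above loops over sysctl items on each OSD node. Rendered as a plain sysctl.conf fragment (values taken directly from the log items; the rendering code itself is just an illustration, not what the role does):

```python
# The sysctl items applied to testbed-node-3/4/5, as logged above.
tuning = [
    {"name": "fs.aio-max-nr", "value": "1048576"},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

# Equivalent /etc/sysctl.d fragment, one "key = value" line per item.
conf = "\n".join(f"{t['name']} = {t['value']}" for t in tuning)
print(conf)
```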
2026-02-23 20:40:00.741489 | orchestrator | Monday 23 February 2026 20:35:57 +0000 (0:00:03.640) 0:06:29.124 ******* 2026-02-23 20:40:00.741492 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.741495 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.741498 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.741501 | orchestrator | 2026-02-23 20:40:00.741504 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-23 20:40:00.741507 | orchestrator | Monday 23 February 2026 20:35:57 +0000 (0:00:00.269) 0:06:29.393 ******* 2026-02-23 20:40:00.741511 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.741514 | orchestrator | 2026-02-23 20:40:00.741517 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-23 20:40:00.741520 | orchestrator | Monday 23 February 2026 20:35:57 +0000 (0:00:00.444) 0:06:29.837 ******* 2026-02-23 20:40:00.741523 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-23 20:40:00.741526 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-23 20:40:00.741529 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-23 20:40:00.741533 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-23 20:40:00.741543 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-23 20:40:00.741546 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-23 20:40:00.741549 | orchestrator | 2026-02-23 20:40:00.741552 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-23 20:40:00.741555 | orchestrator | Monday 23 February 2026 20:35:59 +0000 (0:00:01.221) 0:06:31.058 ******* 2026-02-23 20:40:00.741559 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.741562 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-23 20:40:00.741565 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-23 20:40:00.741568 | orchestrator | 2026-02-23 20:40:00.741573 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-23 20:40:00.741576 | orchestrator | Monday 23 February 2026 20:36:01 +0000 (0:00:02.233) 0:06:33.292 ******* 2026-02-23 20:40:00.741579 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-23 20:40:00.741583 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-23 20:40:00.741586 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.741589 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-23 20:40:00.741595 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-23 20:40:00.741598 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.741601 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-23 20:40:00.741604 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-23 20:40:00.741607 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.741610 | orchestrator | 2026-02-23 20:40:00.741613 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-23 20:40:00.741617 | orchestrator | Monday 23 February 2026 20:36:02 +0000 (0:00:01.153) 0:06:34.445 ******* 2026-02-23 20:40:00.741620 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-23 20:40:00.741623 | orchestrator | 2026-02-23 20:40:00.741626 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-23 20:40:00.741629 | orchestrator | Monday 23 February 2026 20:36:05 +0000 (0:00:02.533) 0:06:36.978 ******* 2026-02-23 20:40:00.741632 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.741635 | orchestrator | 2026-02-23 20:40:00.741638 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-23 20:40:00.741641 | orchestrator | Monday 23 February 2026 20:36:05 +0000 (0:00:00.497) 0:06:37.475 ******* 2026-02-23 20:40:00.741645 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3', 'data_vg': 'ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3'}) 2026-02-23 20:40:00.741648 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-16360c2d-86c0-538a-b982-f32cf88f5f8a', 'data_vg': 'ceph-16360c2d-86c0-538a-b982-f32cf88f5f8a'}) 2026-02-23 20:40:00.741656 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2b14837c-f03f-563c-b8ac-393f544981fc', 'data_vg': 'ceph-2b14837c-f03f-563c-b8ac-393f544981fc'}) 2026-02-23 20:40:00.741659 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-721c0c76-436b-5140-8464-e8c748d186e3', 'data_vg': 'ceph-721c0c76-436b-5140-8464-e8c748d186e3'}) 2026-02-23 20:40:00.741662 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-21252442-555c-5549-b537-6075952af6e0', 'data_vg': 'ceph-21252442-555c-5549-b537-6075952af6e0'}) 2026-02-23 20:40:00.741666 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fef89255-3917-5f7c-b809-8ef443377219', 'data_vg': 'ceph-fef89255-3917-5f7c-b809-8ef443377219'}) 2026-02-23 20:40:00.741669 | orchestrator | 2026-02-23 20:40:00.741672 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-23 20:40:00.741675 | orchestrator | Monday 23 February 2026 20:36:49 +0000 (0:00:44.339) 0:07:21.815 ******* 2026-02-23 20:40:00.741678 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.741681 | orchestrator | skipping: [testbed-node-4] 2026-02-23 
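Each item in the "Use ceph-volume to create osds" task above is a pre-provisioned data LV inside a per-OSD volume group. A sketch of the kind of command line this maps to, assuming bluestore and the common `ceph-volume lvm create` form (the exact flags the role passes are not shown in the log, so treat this as an approximation):

```python
def ceph_volume_cmd(item: dict) -> list:
    """Build a ceph-volume invocation for one lvm_volumes item (vg/lv pair)."""
    lv = f"{item['data_vg']}/{item['data']}"  # VG/LV as ceph-volume expects
    return ["ceph-volume", "--cluster", "ceph", "lvm", "create",
            "--bluestore", "--data", lv]

# Example using the first item logged for testbed-node-5.
item = {"data": "osd-block-086e8658-baeb-56a9-865d-4af6c70c9ca3",
        "data_vg": "ceph-086e8658-baeb-56a9-865d-4af6c70c9ca3"}
print(" ".join(ceph_volume_cmd(item)))
```

The 44-second task duration is dominated by these six OSD creations running in parallel across the three nodes, two per node.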
20:40:00.741684 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.741688 | orchestrator | 2026-02-23 20:40:00.741691 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-23 20:40:00.741694 | orchestrator | Monday 23 February 2026 20:36:50 +0000 (0:00:00.294) 0:07:22.110 ******* 2026-02-23 20:40:00.741697 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.741700 | orchestrator | 2026-02-23 20:40:00.741703 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-23 20:40:00.741706 | orchestrator | Monday 23 February 2026 20:36:50 +0000 (0:00:00.495) 0:07:22.606 ******* 2026-02-23 20:40:00.741710 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741713 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741716 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741719 | orchestrator | 2026-02-23 20:40:00.741722 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-23 20:40:00.741725 | orchestrator | Monday 23 February 2026 20:36:51 +0000 (0:00:00.895) 0:07:23.501 ******* 2026-02-23 20:40:00.741728 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.741734 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.741737 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.741740 | orchestrator | 2026-02-23 20:40:00.741743 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-23 20:40:00.741746 | orchestrator | Monday 23 February 2026 20:36:54 +0000 (0:00:02.595) 0:07:26.096 ******* 2026-02-23 20:40:00.741749 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.741752 | orchestrator | 2026-02-23 20:40:00.741756 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] ***********************************
Monday 23 February 2026 20:36:54 +0000 (0:00:00.509) 0:07:26.606 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Monday 23 February 2026 20:36:56 +0000 (0:00:01.499) 0:07:28.105 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Monday 23 February 2026 20:36:57 +0000 (0:00:01.079) 0:07:29.185 *******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Monday 23 February 2026 20:36:59 +0000 (0:00:01.833) 0:07:31.019 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
Monday 23 February 2026 20:36:59 +0000 (0:00:00.314) 0:07:31.334 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
Monday 23 February 2026 20:37:00 +0000 (0:00:00.627) 0:07:31.961 *******
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=2)
ok: [testbed-node-5] => (item=1)
ok: [testbed-node-3] => (item=5)
ok: [testbed-node-4] => (item=4)
ok: [testbed-node-5] => (item=3)

TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
Monday 23 February 2026 20:37:01 +0000 (0:00:01.082) 0:07:33.044 *******
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=2)
changed: [testbed-node-5] => (item=1)
changed: [testbed-node-4] => (item=4)
changed: [testbed-node-3] => (item=5)
changed: [testbed-node-5] => (item=3)

TASK [ceph-osd : Systemd start osd] ********************************************
Monday 23 February 2026 20:37:03 +0000 (0:00:01.916) 0:07:34.960 *******
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=2)
changed: [testbed-node-5] => (item=1)
changed: [testbed-node-3] => (item=5)
changed: [testbed-node-5] => (item=3)
changed: [testbed-node-4] => (item=4)

TASK [ceph-osd : Unset noup flag] **********************************************
Monday 23 February 2026 20:37:06 +0000 (0:00:03.843) 0:07:38.804 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Wait for all osd to be up] ************************************
Monday 23 February 2026 20:37:09 +0000 (0:00:02.600) 0:07:41.404 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include crush_rules.yml] **************************************
Monday 23 February 2026 20:37:21 +0000 (0:00:12.381) 0:07:53.785 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 23 February 2026 20:37:22 +0000 (0:00:00.874) 0:07:54.659 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Monday 23 February 2026 20:37:23 +0000 (0:00:00.344) 0:07:55.004 *******
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Monday 23 February 2026 20:37:23 +0000 (0:00:00.465) 0:07:55.470 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Monday 23 February 2026 20:37:24 +0000 (0:00:00.525) 0:07:55.995 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Monday 23 February 2026 20:37:24 +0000 (0:00:00.444) 0:07:56.440 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Monday 23 February 2026 20:37:24 +0000 (0:00:00.197) 0:07:56.637 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Monday 23 February 2026 20:37:24 +0000 (0:00:00.263) 0:07:56.901 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Monday 23 February 2026 20:37:25 +0000 (0:00:00.218) 0:07:57.120 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Monday 23 February 2026 20:37:25 +0000 (0:00:00.273) 0:07:57.393 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Monday 23 February 2026 20:37:25 +0000 (0:00:00.119) 0:07:57.513 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Monday 23 February 2026 20:37:25 +0000 (0:00:00.203) 0:07:57.716 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Monday 23 February 2026 20:37:25 +0000 (0:00:00.192) 0:07:57.908 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Monday 23 February 2026 20:37:26 +0000 (0:00:00.695) 0:07:58.603 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Monday 23 February 2026 20:37:26 +0000 (0:00:00.269) 0:07:58.872 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Monday 23 February 2026 20:37:27 +0000 (0:00:00.190) 0:07:59.063 *******
skipping: [testbed-node-3]

PLAY [Apply role ceph-crash] ***************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 23 February 2026 20:37:27 +0000 (0:00:00.593) 0:07:59.657 *******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 23 February 2026 20:37:28 +0000 (0:00:01.002) 0:08:00.659 *******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Monday 23 February 2026 20:37:29 +0000 (0:00:01.007) 0:08:01.667 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 23 February 2026 20:37:30 +0000 (0:00:01.107) 0:08:02.775 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 23 February 2026 20:37:31 +0000 (0:00:00.721) 0:08:03.496 *******
ok: [testbed-node-3]
skipping: [testbed-node-0]
ok: [testbed-node-4]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 23 February 2026 20:37:32 +0000 (0:00:00.879) 0:08:04.376 *******
skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 23 February 2026 20:37:33 +0000 (0:00:00.847) 0:08:05.224 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 23 February 2026 20:37:34 +0000 (0:00:01.080) 0:08:06.304 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 23 February 2026 20:37:34 +0000 (0:00:00.556) 0:08:06.860 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 23 February 2026 20:37:35 +0000 (0:00:00.743) 0:08:07.604 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 23 February 2026 20:37:36 +0000 (0:00:00.998) 0:08:08.603 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 23 February 2026 20:37:37 +0000 (0:00:01.131) 0:08:09.734 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 23 February 2026 20:37:38 +0000 (0:00:00.494) 0:08:10.229 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 23 February 2026 20:37:38 +0000 (0:00:00.671) 0:08:10.900 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 23 February 2026 20:37:39 +0000 (0:00:00.523) 0:08:11.424 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 23 February 2026 20:37:40 +0000 (0:00:00.697) 0:08:12.122 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 23 February 2026 20:37:40 +0000 (0:00:00.534) 0:08:12.657 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 23 February 2026 20:37:41 +0000 (0:00:00.653) 0:08:13.310 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 23 February 2026 20:37:41 +0000 (0:00:00.512) 0:08:13.823 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 23 February 2026 20:37:42 +0000 (0:00:00.673) 0:08:14.496 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 23 February 2026 20:37:43 +0000 (0:00:00.543) 0:08:15.040 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-crash : Create client.crash keyring] ********************************
Monday 23 February 2026 20:37:44 +0000 (0:00:01.055) 0:08:16.095 *******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Monday 23 February 2026 20:37:48 +0000 (0:00:04.241) 0:08:20.337 *******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Monday 23 February 2026 20:37:50 +0000 (0:00:02.060) 0:08:22.398 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Monday 23 February 2026 20:37:52 +0000 (0:00:01.872) 0:08:24.271 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Monday 23 February 2026 20:37:53 +0000 (0:00:00.993) 0:08:25.264 *******
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Monday 23 February 2026 20:37:54 +0000 (0:00:01.047) 0:08:26.311 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Monday 23 February 2026 20:37:55 +0000 (0:00:01.614) 0:08:27.925 *******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Monday 23 February 2026 20:37:59 +0000 (0:00:03.170) 0:08:31.096 *******
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Monday 23 February 2026 20:38:00 +0000 (0:00:01.198) 0:08:32.294 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Monday 23 February 2026 20:38:01 +0000 (0:00:00.847) 0:08:33.142 *******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Monday 23 February 2026 20:38:04 +0000 (0:00:03.484) 0:08:36.626 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 23 February 2026 20:38:05 +0000 (0:00:00.980) 0:08:37.607 *******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 23 February 2026 20:38:06 +0000 (0:00:00.456) 0:08:38.063 *******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 23 February 2026 20:38:06 +0000 (0:00:00.572) 0:08:38.636 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 23 February 2026 20:38:06 +0000 (0:00:00.222) 0:08:38.858 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 23 February 2026 20:38:07 +0000 (0:00:00.642) 0:08:39.500 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 23 February 2026 20:38:08 +0000 (0:00:00.926) 0:08:40.426 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 23 February 2026 20:38:09 +0000 (0:00:00.757) 0:08:41.184 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 23 February 2026 20:38:09 +0000 (0:00:00.307) 0:08:41.491 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 23 February 2026 20:38:09 +0000 (0:00:00.267) 0:08:41.759 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 23 February 2026 20:38:10 +0000 (0:00:00.499) 0:08:42.258 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 23 February 2026 20:38:11 +0000 (0:00:00.674) 0:08:42.933 *******
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday
23 February 2026 20:38:11 +0000 (0:00:00.728) 0:08:43.662 ******* 2026-02-23 20:40:00.743452 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.743455 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.743461 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.743464 | orchestrator | 2026-02-23 20:40:00.743467 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-23 20:40:00.743470 | orchestrator | Monday 23 February 2026 20:38:11 +0000 (0:00:00.259) 0:08:43.921 ******* 2026-02-23 20:40:00.743475 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.743479 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.743482 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.743485 | orchestrator | 2026-02-23 20:40:00.743488 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-23 20:40:00.743491 | orchestrator | Monday 23 February 2026 20:38:12 +0000 (0:00:00.437) 0:08:44.358 ******* 2026-02-23 20:40:00.743494 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.743497 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.743500 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.743504 | orchestrator | 2026-02-23 20:40:00.743507 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-23 20:40:00.743510 | orchestrator | Monday 23 February 2026 20:38:12 +0000 (0:00:00.281) 0:08:44.640 ******* 2026-02-23 20:40:00.743513 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.743516 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.743519 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.743522 | orchestrator | 2026-02-23 20:40:00.743525 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-23 20:40:00.743529 | orchestrator | Monday 23 February 2026 20:38:12 +0000 
(0:00:00.277) 0:08:44.918 ******* 2026-02-23 20:40:00.743532 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.743535 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.743539 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.743544 | orchestrator | 2026-02-23 20:40:00.743549 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-23 20:40:00.743554 | orchestrator | Monday 23 February 2026 20:38:13 +0000 (0:00:00.313) 0:08:45.231 ******* 2026-02-23 20:40:00.743558 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.743563 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.743567 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.743572 | orchestrator | 2026-02-23 20:40:00.743576 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-23 20:40:00.743581 | orchestrator | Monday 23 February 2026 20:38:13 +0000 (0:00:00.413) 0:08:45.645 ******* 2026-02-23 20:40:00.743586 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.743591 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.743596 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.743601 | orchestrator | 2026-02-23 20:40:00.743605 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-23 20:40:00.743610 | orchestrator | Monday 23 February 2026 20:38:13 +0000 (0:00:00.253) 0:08:45.899 ******* 2026-02-23 20:40:00.743615 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.743620 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.743625 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.743629 | orchestrator | 2026-02-23 20:40:00.743633 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-23 20:40:00.743638 | orchestrator | Monday 23 February 2026 20:38:14 +0000 (0:00:00.298) 
0:08:46.197 ******* 2026-02-23 20:40:00.743643 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.743648 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.743653 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.743658 | orchestrator | 2026-02-23 20:40:00.743663 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-23 20:40:00.743668 | orchestrator | Monday 23 February 2026 20:38:14 +0000 (0:00:00.450) 0:08:46.647 ******* 2026-02-23 20:40:00.743673 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.743677 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.743682 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.743687 | orchestrator | 2026-02-23 20:40:00.743692 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-23 20:40:00.743703 | orchestrator | Monday 23 February 2026 20:38:15 +0000 (0:00:00.692) 0:08:47.339 ******* 2026-02-23 20:40:00.743712 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.743717 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.743723 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-02-23 20:40:00.743729 | orchestrator | 2026-02-23 20:40:00.743735 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-02-23 20:40:00.743741 | orchestrator | Monday 23 February 2026 20:38:15 +0000 (0:00:00.331) 0:08:47.671 ******* 2026-02-23 20:40:00.743747 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-23 20:40:00.743752 | orchestrator | 2026-02-23 20:40:00.743759 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-02-23 20:40:00.743764 | orchestrator | Monday 23 February 2026 20:38:18 +0000 (0:00:02.465) 0:08:50.136 ******* 2026-02-23 20:40:00.743771 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-02-23 20:40:00.743779 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.743785 | orchestrator | 2026-02-23 20:40:00.743791 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-02-23 20:40:00.743796 | orchestrator | Monday 23 February 2026 20:38:18 +0000 (0:00:00.212) 0:08:50.348 ******* 2026-02-23 20:40:00.743804 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-23 20:40:00.743813 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-23 20:40:00.743820 | orchestrator | 2026-02-23 20:40:00.743827 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-23 20:40:00.743837 | orchestrator | Monday 23 February 2026 20:38:26 +0000 (0:00:08.124) 0:08:58.473 ******* 2026-02-23 20:40:00.743843 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-23 20:40:00.743848 | orchestrator | 2026-02-23 20:40:00.743854 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-23 20:40:00.743859 | orchestrator | Monday 23 February 2026 20:38:30 +0000 (0:00:03.672) 0:09:02.145 ******* 2026-02-23 20:40:00.743864 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-23 20:40:00.743869 | orchestrator | 2026-02-23 20:40:00.743873 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-23 20:40:00.743878 | orchestrator | Monday 23 February 2026 20:38:30 +0000 (0:00:00.536) 0:09:02.682 ******* 2026-02-23 20:40:00.743883 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-23 20:40:00.743888 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-23 20:40:00.743893 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-23 20:40:00.743898 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-23 20:40:00.743903 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-23 20:40:00.743909 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-23 20:40:00.743914 | orchestrator | 2026-02-23 20:40:00.743919 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-23 20:40:00.743924 | orchestrator | Monday 23 February 2026 20:38:31 +0000 (0:00:00.981) 0:09:03.663 ******* 2026-02-23 20:40:00.743934 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.743939 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-23 20:40:00.743945 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-23 20:40:00.743950 | orchestrator | 2026-02-23 20:40:00.743955 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-23 20:40:00.743960 | orchestrator | Monday 23 February 2026 20:38:33 +0000 (0:00:02.240) 0:09:05.904 ******* 2026-02-23 20:40:00.743966 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-23 20:40:00.743971 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-02-23 20:40:00.743976 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.743982 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-23 20:40:00.743987 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-23 20:40:00.743993 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.743998 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-23 20:40:00.744003 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-23 20:40:00.744008 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.744014 | orchestrator | 2026-02-23 20:40:00.744019 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-23 20:40:00.744024 | orchestrator | Monday 23 February 2026 20:38:35 +0000 (0:00:01.437) 0:09:07.341 ******* 2026-02-23 20:40:00.744030 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.744035 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.744040 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.744045 | orchestrator | 2026-02-23 20:40:00.744050 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-23 20:40:00.744054 | orchestrator | Monday 23 February 2026 20:38:38 +0000 (0:00:02.830) 0:09:10.172 ******* 2026-02-23 20:40:00.744061 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744064 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744067 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.744070 | orchestrator | 2026-02-23 20:40:00.744073 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-23 20:40:00.744077 | orchestrator | Monday 23 February 2026 20:38:38 +0000 (0:00:00.288) 0:09:10.460 ******* 2026-02-23 20:40:00.744080 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-23 20:40:00.744083 | orchestrator | 2026-02-23 20:40:00.744086 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-23 20:40:00.744089 | orchestrator | Monday 23 February 2026 20:38:39 +0000 (0:00:00.608) 0:09:11.068 ******* 2026-02-23 20:40:00.744092 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.744095 | orchestrator | 2026-02-23 20:40:00.744099 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-23 20:40:00.744102 | orchestrator | Monday 23 February 2026 20:38:39 +0000 (0:00:00.483) 0:09:11.551 ******* 2026-02-23 20:40:00.744105 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.744108 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.744111 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.744114 | orchestrator | 2026-02-23 20:40:00.744117 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-23 20:40:00.744120 | orchestrator | Monday 23 February 2026 20:38:40 +0000 (0:00:01.295) 0:09:12.847 ******* 2026-02-23 20:40:00.744123 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.744127 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.744131 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.744163 | orchestrator | 2026-02-23 20:40:00.744169 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-23 20:40:00.744172 | orchestrator | Monday 23 February 2026 20:38:42 +0000 (0:00:01.340) 0:09:14.187 ******* 2026-02-23 20:40:00.744179 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.744182 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.744185 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.744188 | orchestrator | 2026-02-23 
20:40:00.744191 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-23 20:40:00.744194 | orchestrator | Monday 23 February 2026 20:38:44 +0000 (0:00:02.097) 0:09:16.285 ******* 2026-02-23 20:40:00.744199 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.744210 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.744214 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.744217 | orchestrator | 2026-02-23 20:40:00.744220 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-23 20:40:00.744223 | orchestrator | Monday 23 February 2026 20:38:46 +0000 (0:00:02.097) 0:09:18.383 ******* 2026-02-23 20:40:00.744227 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744232 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744237 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744242 | orchestrator | 2026-02-23 20:40:00.744247 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-23 20:40:00.744253 | orchestrator | Monday 23 February 2026 20:38:47 +0000 (0:00:01.191) 0:09:19.574 ******* 2026-02-23 20:40:00.744258 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.744263 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.744268 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.744274 | orchestrator | 2026-02-23 20:40:00.744277 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-23 20:40:00.744281 | orchestrator | Monday 23 February 2026 20:38:48 +0000 (0:00:00.653) 0:09:20.227 ******* 2026-02-23 20:40:00.744284 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.744287 | orchestrator | 2026-02-23 20:40:00.744290 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-23 20:40:00.744293 | orchestrator | Monday 23 February 2026 20:38:48 +0000 (0:00:00.592) 0:09:20.819 ******* 2026-02-23 20:40:00.744296 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744299 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744302 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744305 | orchestrator | 2026-02-23 20:40:00.744309 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-23 20:40:00.744312 | orchestrator | Monday 23 February 2026 20:38:49 +0000 (0:00:00.324) 0:09:21.144 ******* 2026-02-23 20:40:00.744315 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.744318 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.744321 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.744324 | orchestrator | 2026-02-23 20:40:00.744327 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-23 20:40:00.744330 | orchestrator | Monday 23 February 2026 20:38:50 +0000 (0:00:01.317) 0:09:22.461 ******* 2026-02-23 20:40:00.744334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-23 20:40:00.744337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-23 20:40:00.744340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-23 20:40:00.744343 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744346 | orchestrator | 2026-02-23 20:40:00.744349 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-23 20:40:00.744352 | orchestrator | Monday 23 February 2026 20:38:51 +0000 (0:00:00.725) 0:09:23.186 ******* 2026-02-23 20:40:00.744355 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744359 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744362 | orchestrator | ok: [testbed-node-5] 2026-02-23 
20:40:00.744365 | orchestrator | 2026-02-23 20:40:00.744368 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-23 20:40:00.744371 | orchestrator | 2026-02-23 20:40:00.744374 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-23 20:40:00.744380 | orchestrator | Monday 23 February 2026 20:38:51 +0000 (0:00:00.718) 0:09:23.905 ******* 2026-02-23 20:40:00.744388 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.744391 | orchestrator | 2026-02-23 20:40:00.744394 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-23 20:40:00.744398 | orchestrator | Monday 23 February 2026 20:38:52 +0000 (0:00:00.435) 0:09:24.340 ******* 2026-02-23 20:40:00.744401 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.744404 | orchestrator | 2026-02-23 20:40:00.744407 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-23 20:40:00.744410 | orchestrator | Monday 23 February 2026 20:38:53 +0000 (0:00:00.626) 0:09:24.967 ******* 2026-02-23 20:40:00.744413 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744416 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744419 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.744422 | orchestrator | 2026-02-23 20:40:00.744426 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-23 20:40:00.744429 | orchestrator | Monday 23 February 2026 20:38:53 +0000 (0:00:00.303) 0:09:25.270 ******* 2026-02-23 20:40:00.744432 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744435 | orchestrator | ok: [testbed-node-4] 2026-02-23 
20:40:00.744438 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744441 | orchestrator | 2026-02-23 20:40:00.744444 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-23 20:40:00.744447 | orchestrator | Monday 23 February 2026 20:38:54 +0000 (0:00:00.726) 0:09:25.996 ******* 2026-02-23 20:40:00.744450 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744454 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744457 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744460 | orchestrator | 2026-02-23 20:40:00.744463 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-23 20:40:00.744466 | orchestrator | Monday 23 February 2026 20:38:54 +0000 (0:00:00.739) 0:09:26.736 ******* 2026-02-23 20:40:00.744469 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744472 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744475 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744478 | orchestrator | 2026-02-23 20:40:00.744481 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-23 20:40:00.744485 | orchestrator | Monday 23 February 2026 20:38:55 +0000 (0:00:01.043) 0:09:27.780 ******* 2026-02-23 20:40:00.744490 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744495 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744500 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.744506 | orchestrator | 2026-02-23 20:40:00.744515 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-23 20:40:00.744520 | orchestrator | Monday 23 February 2026 20:38:56 +0000 (0:00:00.274) 0:09:28.054 ******* 2026-02-23 20:40:00.744525 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744531 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744535 | orchestrator | skipping: 
[testbed-node-5] 2026-02-23 20:40:00.744538 | orchestrator | 2026-02-23 20:40:00.744541 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-23 20:40:00.744544 | orchestrator | Monday 23 February 2026 20:38:56 +0000 (0:00:00.274) 0:09:28.328 ******* 2026-02-23 20:40:00.744547 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744550 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744553 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.744557 | orchestrator | 2026-02-23 20:40:00.744560 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-23 20:40:00.744563 | orchestrator | Monday 23 February 2026 20:38:56 +0000 (0:00:00.333) 0:09:28.662 ******* 2026-02-23 20:40:00.744566 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744572 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744575 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744578 | orchestrator | 2026-02-23 20:40:00.744581 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-23 20:40:00.744585 | orchestrator | Monday 23 February 2026 20:38:57 +0000 (0:00:00.876) 0:09:29.538 ******* 2026-02-23 20:40:00.744588 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744591 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744594 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744597 | orchestrator | 2026-02-23 20:40:00.744600 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-23 20:40:00.744603 | orchestrator | Monday 23 February 2026 20:38:58 +0000 (0:00:00.635) 0:09:30.174 ******* 2026-02-23 20:40:00.744607 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744610 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744613 | orchestrator | skipping: [testbed-node-5] 2026-02-23 
20:40:00.744616 | orchestrator | 2026-02-23 20:40:00.744619 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-23 20:40:00.744622 | orchestrator | Monday 23 February 2026 20:38:58 +0000 (0:00:00.285) 0:09:30.459 ******* 2026-02-23 20:40:00.744625 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744629 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744634 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.744639 | orchestrator | 2026-02-23 20:40:00.744644 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-23 20:40:00.744647 | orchestrator | Monday 23 February 2026 20:38:58 +0000 (0:00:00.265) 0:09:30.725 ******* 2026-02-23 20:40:00.744650 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744653 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744657 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744660 | orchestrator | 2026-02-23 20:40:00.744663 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-23 20:40:00.744666 | orchestrator | Monday 23 February 2026 20:38:59 +0000 (0:00:00.470) 0:09:31.195 ******* 2026-02-23 20:40:00.744669 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744672 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744675 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744678 | orchestrator | 2026-02-23 20:40:00.744681 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-23 20:40:00.744684 | orchestrator | Monday 23 February 2026 20:38:59 +0000 (0:00:00.305) 0:09:31.501 ******* 2026-02-23 20:40:00.744688 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744691 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744694 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744697 | orchestrator | 2026-02-23 
20:40:00.744702 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-23 20:40:00.744705 | orchestrator | Monday 23 February 2026 20:38:59 +0000 (0:00:00.307) 0:09:31.808 ******* 2026-02-23 20:40:00.744709 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744714 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744719 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.744724 | orchestrator | 2026-02-23 20:40:00.744730 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-23 20:40:00.744735 | orchestrator | Monday 23 February 2026 20:39:00 +0000 (0:00:00.250) 0:09:32.059 ******* 2026-02-23 20:40:00.744741 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744746 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744752 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.744757 | orchestrator | 2026-02-23 20:40:00.744761 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-23 20:40:00.744766 | orchestrator | Monday 23 February 2026 20:39:00 +0000 (0:00:00.433) 0:09:32.493 ******* 2026-02-23 20:40:00.744770 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744773 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744776 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.744782 | orchestrator | 2026-02-23 20:40:00.744785 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-23 20:40:00.744788 | orchestrator | Monday 23 February 2026 20:39:00 +0000 (0:00:00.281) 0:09:32.774 ******* 2026-02-23 20:40:00.744791 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744794 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744797 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744800 | orchestrator | 2026-02-23 20:40:00.744803 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-23 20:40:00.744807 | orchestrator | Monday 23 February 2026 20:39:01 +0000 (0:00:00.297) 0:09:33.072 ******* 2026-02-23 20:40:00.744810 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.744813 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.744816 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.744819 | orchestrator | 2026-02-23 20:40:00.744822 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-23 20:40:00.744825 | orchestrator | Monday 23 February 2026 20:39:01 +0000 (0:00:00.650) 0:09:33.722 ******* 2026-02-23 20:40:00.744828 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.744831 | orchestrator | 2026-02-23 20:40:00.744834 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-23 20:40:00.744840 | orchestrator | Monday 23 February 2026 20:39:02 +0000 (0:00:00.481) 0:09:34.203 ******* 2026-02-23 20:40:00.744843 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.744847 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-23 20:40:00.744850 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-23 20:40:00.744853 | orchestrator | 2026-02-23 20:40:00.744856 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-23 20:40:00.744859 | orchestrator | Monday 23 February 2026 20:39:04 +0000 (0:00:01.882) 0:09:36.085 ******* 2026-02-23 20:40:00.744862 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-23 20:40:00.744865 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-23 20:40:00.744870 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.744876 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-23 20:40:00.744881 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-23 20:40:00.744886 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.744892 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-23 20:40:00.744897 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-23 20:40:00.744902 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.744907 | orchestrator | 2026-02-23 20:40:00.744913 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-23 20:40:00.744916 | orchestrator | Monday 23 February 2026 20:39:05 +0000 (0:00:01.215) 0:09:37.301 ******* 2026-02-23 20:40:00.744920 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.744923 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.744926 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.744929 | orchestrator | 2026-02-23 20:40:00.744932 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-23 20:40:00.744936 | orchestrator | Monday 23 February 2026 20:39:05 +0000 (0:00:00.294) 0:09:37.596 ******* 2026-02-23 20:40:00.744942 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.744946 | orchestrator | 2026-02-23 20:40:00.744952 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-23 20:40:00.744957 | orchestrator | Monday 23 February 2026 20:39:06 +0000 (0:00:00.494) 0:09:38.090 ******* 2026-02-23 20:40:00.744962 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-23 20:40:00.744972 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-23 20:40:00.744978 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-23 20:40:00.744982 | orchestrator | 2026-02-23 20:40:00.744987 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-23 20:40:00.744993 | orchestrator | Monday 23 February 2026 20:39:07 +0000 (0:00:00.910) 0:09:39.000 ******* 2026-02-23 20:40:00.744999 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.745004 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-23 20:40:00.745007 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.745010 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-23 20:40:00.745013 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.745017 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-23 20:40:00.745020 | orchestrator | 2026-02-23 20:40:00.745023 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-23 20:40:00.745026 | orchestrator | Monday 23 February 2026 20:39:11 +0000 (0:00:04.861) 0:09:43.862 ******* 2026-02-23 20:40:00.745029 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.745032 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.745036 | 
orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-23 20:40:00.745039 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-23 20:40:00.745042 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:40:00.745045 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-23 20:40:00.745048 | orchestrator | 2026-02-23 20:40:00.745051 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-23 20:40:00.745073 | orchestrator | Monday 23 February 2026 20:39:14 +0000 (0:00:02.384) 0:09:46.246 ******* 2026-02-23 20:40:00.745079 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-23 20:40:00.745085 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.745090 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-23 20:40:00.745095 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.745101 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-23 20:40:00.745106 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.745111 | orchestrator | 2026-02-23 20:40:00.745116 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-23 20:40:00.745126 | orchestrator | Monday 23 February 2026 20:39:15 +0000 (0:00:01.161) 0:09:47.407 ******* 2026-02-23 20:40:00.745129 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-23 20:40:00.745132 | orchestrator | 2026-02-23 20:40:00.745135 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-23 20:40:00.745151 | orchestrator | Monday 23 February 2026 20:39:15 +0000 (0:00:00.214) 0:09:47.622 ******* 2026-02-23 20:40:00.745157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-23 20:40:00.745162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-23 20:40:00.745168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-23 20:40:00.745177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-23 20:40:00.745183 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-23 20:40:00.745187 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.745190 | orchestrator | 2026-02-23 20:40:00.745193 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-23 20:40:00.745197 | orchestrator | Monday 23 February 2026 20:39:16 +0000 (0:00:00.847) 0:09:48.470 ******* 2026-02-23 20:40:00.745200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-23 20:40:00.745203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-23 20:40:00.745206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-23 20:40:00.745209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-23 20:40:00.745212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-23 20:40:00.745215 | orchestrator | skipping: [testbed-node-3] 2026-02-23 
20:40:00.745219 | orchestrator | 2026-02-23 20:40:00.745224 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-23 20:40:00.745230 | orchestrator | Monday 23 February 2026 20:39:17 +0000 (0:00:01.126) 0:09:49.596 ******* 2026-02-23 20:40:00.745234 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-23 20:40:00.745239 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-23 20:40:00.745242 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-23 20:40:00.745245 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-23 20:40:00.745249 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-23 20:40:00.745252 | orchestrator | 2026-02-23 20:40:00.745255 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-23 20:40:00.745258 | orchestrator | Monday 23 February 2026 20:39:47 +0000 (0:00:30.008) 0:10:19.604 ******* 2026-02-23 20:40:00.745261 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.745264 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.745267 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.745270 | orchestrator | 2026-02-23 20:40:00.745274 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-23 20:40:00.745277 | orchestrator | 
Monday 23 February 2026 20:39:48 +0000 (0:00:00.340) 0:10:19.945 ******* 2026-02-23 20:40:00.745280 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.745283 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.745286 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.745289 | orchestrator | 2026-02-23 20:40:00.745292 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-23 20:40:00.745295 | orchestrator | Monday 23 February 2026 20:39:48 +0000 (0:00:00.335) 0:10:20.281 ******* 2026-02-23 20:40:00.745298 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.745304 | orchestrator | 2026-02-23 20:40:00.745307 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-23 20:40:00.745310 | orchestrator | Monday 23 February 2026 20:39:49 +0000 (0:00:00.842) 0:10:21.123 ******* 2026-02-23 20:40:00.745313 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.745316 | orchestrator | 2026-02-23 20:40:00.745322 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-23 20:40:00.745325 | orchestrator | Monday 23 February 2026 20:39:49 +0000 (0:00:00.541) 0:10:21.665 ******* 2026-02-23 20:40:00.745328 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.745331 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.745334 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.745337 | orchestrator | 2026-02-23 20:40:00.745341 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-23 20:40:00.745344 | orchestrator | Monday 23 February 2026 20:39:50 +0000 (0:00:01.191) 0:10:22.856 ******* 2026-02-23 20:40:00.745347 | orchestrator | changed: 
[testbed-node-3] 2026-02-23 20:40:00.745350 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.745353 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.745356 | orchestrator | 2026-02-23 20:40:00.745359 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-23 20:40:00.745362 | orchestrator | Monday 23 February 2026 20:39:52 +0000 (0:00:01.416) 0:10:24.273 ******* 2026-02-23 20:40:00.745365 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:40:00.745368 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:40:00.745371 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:40:00.745374 | orchestrator | 2026-02-23 20:40:00.745377 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-23 20:40:00.745380 | orchestrator | Monday 23 February 2026 20:39:54 +0000 (0:00:01.850) 0:10:26.123 ******* 2026-02-23 20:40:00.745383 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-23 20:40:00.745387 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-23 20:40:00.745390 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-23 20:40:00.745393 | orchestrator | 2026-02-23 20:40:00.745396 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-23 20:40:00.745399 | orchestrator | Monday 23 February 2026 20:39:56 +0000 (0:00:02.521) 0:10:28.645 ******* 2026-02-23 20:40:00.745402 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.745405 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.745408 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.745411 | orchestrator 
| 2026-02-23 20:40:00.745414 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-23 20:40:00.745417 | orchestrator | Monday 23 February 2026 20:39:57 +0000 (0:00:00.359) 0:10:29.004 ******* 2026-02-23 20:40:00.745420 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:40:00.745423 | orchestrator | 2026-02-23 20:40:00.745426 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-23 20:40:00.745429 | orchestrator | Monday 23 February 2026 20:39:57 +0000 (0:00:00.523) 0:10:29.527 ******* 2026-02-23 20:40:00.745432 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.745435 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.745438 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.745442 | orchestrator | 2026-02-23 20:40:00.745445 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-23 20:40:00.745448 | orchestrator | Monday 23 February 2026 20:39:58 +0000 (0:00:00.599) 0:10:30.127 ******* 2026-02-23 20:40:00.745453 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:40:00.745457 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:40:00.745461 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:40:00.745464 | orchestrator | 2026-02-23 20:40:00.745468 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-23 20:40:00.745471 | orchestrator | Monday 23 February 2026 20:39:58 +0000 (0:00:00.350) 0:10:30.478 ******* 2026-02-23 20:40:00.745474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-23 20:40:00.745477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-23 20:40:00.745480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-23 20:40:00.745483 | orchestrator 
| skipping: [testbed-node-3] 2026-02-23 20:40:00.745486 | orchestrator | 2026-02-23 20:40:00.745489 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-23 20:40:00.745492 | orchestrator | Monday 23 February 2026 20:39:59 +0000 (0:00:00.624) 0:10:31.102 ******* 2026-02-23 20:40:00.745495 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:40:00.745498 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:40:00.745501 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:40:00.745504 | orchestrator | 2026-02-23 20:40:00.745507 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:40:00.745511 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-23 20:40:00.745514 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-23 20:40:00.745517 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-23 20:40:00.745520 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-23 20:40:00.745524 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-23 20:40:00.745529 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-23 20:40:00.745532 | orchestrator | 2026-02-23 20:40:00.745535 | orchestrator | 2026-02-23 20:40:00.745538 | orchestrator | 2026-02-23 20:40:00.745541 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:40:00.745545 | orchestrator | Monday 23 February 2026 20:39:59 +0000 (0:00:00.231) 0:10:31.334 ******* 2026-02-23 20:40:00.745548 | orchestrator | =============================================================================== 
2026-02-23 20:40:00.745551 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.34s 2026-02-23 20:40:00.745554 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 37.04s 2026-02-23 20:40:00.745557 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.01s 2026-02-23 20:40:00.745560 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 29.84s 2026-02-23 20:40:00.745563 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.69s 2026-02-23 20:40:00.745566 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.86s 2026-02-23 20:40:00.745569 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.38s 2026-02-23 20:40:00.745572 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.20s 2026-02-23 20:40:00.745576 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.61s 2026-02-23 20:40:00.745579 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.12s 2026-02-23 20:40:00.745582 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.95s 2026-02-23 20:40:00.745588 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.50s 2026-02-23 20:40:00.745591 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.18s 2026-02-23 20:40:00.745594 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.86s 2026-02-23 20:40:00.745597 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.24s 2026-02-23 20:40:00.745600 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.23s 2026-02-23 
20:40:00.745603 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.87s 2026-02-23 20:40:00.745606 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.84s 2026-02-23 20:40:00.745609 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.67s 2026-02-23 20:40:00.745612 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.64s 2026-02-23 20:40:00.745615 | orchestrator | 2026-02-23 20:40:00 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:40:03.783086 | orchestrator | 2026-02-23 20:40:03 | INFO  | Task a3bac2b4-edaa-4ca6-b851-848c88d8a8fe is in state STARTED 2026-02-23 20:40:03.784562 | orchestrator | 2026-02-23 20:40:03 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED 2026-02-23 20:40:03.786445 | orchestrator | 2026-02-23 20:40:03 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED 2026-02-23 20:40:03.786845 | orchestrator | 2026-02-23 20:40:03 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:40:06.831328 | orchestrator | 2026-02-23 20:40:06 | INFO  | Task a3bac2b4-edaa-4ca6-b851-848c88d8a8fe is in state STARTED 2026-02-23 20:40:06.832446 | orchestrator | 2026-02-23 20:40:06 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED 2026-02-23 20:40:06.833611 | orchestrator | 2026-02-23 20:40:06 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED 2026-02-23 20:40:06.833644 | orchestrator | 2026-02-23 20:40:06 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:40:09.881548 | orchestrator | 2026-02-23 20:40:09 | INFO  | Task a3bac2b4-edaa-4ca6-b851-848c88d8a8fe is in state STARTED 2026-02-23 20:40:09.883359 | orchestrator | 2026-02-23 20:40:09 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED 2026-02-23 20:40:09.885449 | orchestrator | 2026-02-23 20:40:09 | INFO  | Task 
13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED 2026-02-23 20:40:09.885490 | orchestrator | 2026-02-23 20:40:09 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:40:12.938243 | orchestrator | 2026-02-23 20:40:12 | INFO  | Task a3bac2b4-edaa-4ca6-b851-848c88d8a8fe is in state STARTED 2026-02-23 20:40:12.942637 | orchestrator | 2026-02-23 20:40:12 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED 2026-02-23 20:40:12.944745 | orchestrator | 2026-02-23 20:40:12 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED 2026-02-23 20:40:12.944894 | orchestrator | 2026-02-23 20:40:12 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:40:15.991412 | orchestrator | 2026-02-23 20:40:15 | INFO  | Task a3bac2b4-edaa-4ca6-b851-848c88d8a8fe is in state STARTED 2026-02-23 20:40:15.992997 | orchestrator | 2026-02-23 20:40:15 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED 2026-02-23 20:40:15.994945 | orchestrator | 2026-02-23 20:40:15 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED 2026-02-23 20:40:15.994976 | orchestrator | 2026-02-23 20:40:15 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:40:19.038925 | orchestrator | 2026-02-23 20:40:19 | INFO  | Task a3bac2b4-edaa-4ca6-b851-848c88d8a8fe is in state STARTED 2026-02-23 20:40:19.039919 | orchestrator | 2026-02-23 20:40:19 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED 2026-02-23 20:40:19.042435 | orchestrator | 2026-02-23 20:40:19 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED 2026-02-23 20:40:19.042519 | orchestrator | 2026-02-23 20:40:19 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:40:22.102009 | orchestrator | 2026-02-23 20:40:22 | INFO  | Task a3bac2b4-edaa-4ca6-b851-848c88d8a8fe is in state SUCCESS 2026-02-23 20:40:22.103562 | orchestrator | 2026-02-23 20:40:22.103674 | orchestrator | 2026-02-23 20:40:22.103683 | orchestrator | 
PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:40:22.103688 | orchestrator | 2026-02-23 20:40:22.103693 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:40:22.103697 | orchestrator | Monday 23 February 2026 20:37:53 +0000 (0:00:00.275) 0:00:00.275 ******* 2026-02-23 20:40:22.103701 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:22.103707 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:22.103711 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:22.103715 | orchestrator | 2026-02-23 20:40:22.103719 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:40:22.103723 | orchestrator | Monday 23 February 2026 20:37:53 +0000 (0:00:00.264) 0:00:00.540 ******* 2026-02-23 20:40:22.103742 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-23 20:40:22.103749 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-23 20:40:22.103762 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-23 20:40:22.103768 | orchestrator | 2026-02-23 20:40:22.103775 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-23 20:40:22.103781 | orchestrator | 2026-02-23 20:40:22.103787 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-23 20:40:22.103793 | orchestrator | Monday 23 February 2026 20:37:54 +0000 (0:00:00.347) 0:00:00.888 ******* 2026-02-23 20:40:22.103800 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:22.103806 | orchestrator | 2026-02-23 20:40:22.103812 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-23 20:40:22.103818 | orchestrator | Monday 23 February 2026 20:37:54 +0000 
(0:00:00.444) 0:00:01.332 ******* 2026-02-23 20:40:22.103825 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-23 20:40:22.103831 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-23 20:40:22.103837 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-23 20:40:22.103843 | orchestrator | 2026-02-23 20:40:22.103860 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-23 20:40:22.103885 | orchestrator | Monday 23 February 2026 20:37:56 +0000 (0:00:01.644) 0:00:02.976 ******* 2026-02-23 20:40:22.103892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.103900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.103936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.103946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.103958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.103966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.103978 | orchestrator | 2026-02-23 20:40:22.103985 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-23 20:40:22.103991 | orchestrator | Monday 23 February 2026 20:37:58 +0000 (0:00:01.742) 0:00:04.718 ******* 2026-02-23 20:40:22.103997 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:22.104003 | orchestrator | 2026-02-23 20:40:22.104010 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-23 20:40:22.104015 | orchestrator | Monday 23 February 2026 20:37:58 +0000 (0:00:00.455) 0:00:05.174 ******* 2026-02-23 20:40:22.104026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.104031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.104039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.104043 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.104058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.104066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.104076 | orchestrator | 2026-02-23 20:40:22.104084 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-23 20:40:22.104089 | orchestrator | Monday 23 February 2026 20:38:01 +0000 (0:00:02.915) 0:00:08.090 ******* 2026-02-23 20:40:22.104104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-23 20:40:22.104142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-23 20:40:22.104149 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:22.104156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-23 20:40:22.104167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-23 20:40:22.104173 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:22.104183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-23 20:40:22.104193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-23 20:40:22.104199 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:22.104205 | orchestrator | 2026-02-23 20:40:22.104210 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-23 20:40:22.104216 | orchestrator | Monday 23 February 2026 20:38:02 +0000 (0:00:01.041) 0:00:09.131 ******* 2026-02-23 20:40:22.104222 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-23 20:40:22.104235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-23 20:40:22.104246 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:22.104257 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-23 20:40:22.104269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-23 20:40:22.104278 | orchestrator | skipping: [testbed-node-1] 2026-02-23 
20:40:22.104283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-23 20:40:22.104295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-23 20:40:22.104302 | orchestrator | skipping: 
[testbed-node-2] 2026-02-23 20:40:22.104308 | orchestrator | 2026-02-23 20:40:22.104314 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-23 20:40:22.104320 | orchestrator | Monday 23 February 2026 20:38:03 +0000 (0:00:00.744) 0:00:09.876 ******* 2026-02-23 20:40:22.104332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.104347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2026-02-23 20:40:22.104354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.104366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 
20:40:22.104374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.104406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.104414 | orchestrator | 2026-02-23 20:40:22.104420 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-23 20:40:22.104426 | orchestrator | Monday 23 February 2026 20:38:05 +0000 (0:00:02.489) 0:00:12.366 ******* 2026-02-23 20:40:22.104430 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:22.104435 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:22.104439 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:22.104443 | orchestrator | 2026-02-23 20:40:22.104447 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-23 20:40:22.104452 | orchestrator | Monday 23 February 2026 20:38:08 +0000 (0:00:02.443) 0:00:14.810 ******* 2026-02-23 20:40:22.104456 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:22.104460 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:22.104464 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:22.104469 | orchestrator | 2026-02-23 20:40:22.104474 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-23 20:40:22.104480 | orchestrator | Monday 23 February 2026 20:38:10 +0000 (0:00:02.144) 0:00:16.954 ******* 2026-02-23 20:40:22.104487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.104502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.104516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-23 20:40:22.104526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.104533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.104545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-23 20:40:22.104557 | orchestrator | 2026-02-23 20:40:22.104563 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-23 20:40:22.104569 | orchestrator | Monday 23 February 2026 20:38:12 +0000 (0:00:01.999) 0:00:18.953 ******* 2026-02-23 20:40:22.104574 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:22.104580 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:22.104586 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:22.104592 | orchestrator | 2026-02-23 20:40:22.104598 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 
2026-02-23 20:40:22.104605 | orchestrator | Monday 23 February 2026 20:38:12 +0000 (0:00:00.248) 0:00:19.202 ******* 2026-02-23 20:40:22.104611 | orchestrator | 2026-02-23 20:40:22.104617 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-23 20:40:22.104624 | orchestrator | Monday 23 February 2026 20:38:12 +0000 (0:00:00.060) 0:00:19.262 ******* 2026-02-23 20:40:22.104631 | orchestrator | 2026-02-23 20:40:22.104637 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-23 20:40:22.104644 | orchestrator | Monday 23 February 2026 20:38:12 +0000 (0:00:00.061) 0:00:19.324 ******* 2026-02-23 20:40:22.104650 | orchestrator | 2026-02-23 20:40:22.104656 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-23 20:40:22.104662 | orchestrator | Monday 23 February 2026 20:38:12 +0000 (0:00:00.074) 0:00:19.399 ******* 2026-02-23 20:40:22.104669 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:22.104675 | orchestrator | 2026-02-23 20:40:22.104683 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-23 20:40:22.104687 | orchestrator | Monday 23 February 2026 20:38:13 +0000 (0:00:00.632) 0:00:20.031 ******* 2026-02-23 20:40:22.104691 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:22.104695 | orchestrator | 2026-02-23 20:40:22.104699 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-23 20:40:22.104703 | orchestrator | Monday 23 February 2026 20:38:13 +0000 (0:00:00.319) 0:00:20.351 ******* 2026-02-23 20:40:22.104707 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:22.104711 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:22.104714 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:22.104718 | orchestrator | 2026-02-23 20:40:22.104722 | orchestrator | RUNNING 
HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-23 20:40:22.104726 | orchestrator | Monday 23 February 2026 20:39:07 +0000 (0:00:53.635) 0:01:13.986 ******* 2026-02-23 20:40:22.104730 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:22.104733 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:22.104737 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:22.104741 | orchestrator | 2026-02-23 20:40:22.104745 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-23 20:40:22.104749 | orchestrator | Monday 23 February 2026 20:40:06 +0000 (0:00:59.626) 0:02:13.613 ******* 2026-02-23 20:40:22.104752 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:22.104757 | orchestrator | 2026-02-23 20:40:22.104760 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-23 20:40:22.104764 | orchestrator | Monday 23 February 2026 20:40:07 +0000 (0:00:00.678) 0:02:14.291 ******* 2026-02-23 20:40:22.104768 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:22.104772 | orchestrator | 2026-02-23 20:40:22.104776 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-02-23 20:40:22.104780 | orchestrator | Monday 23 February 2026 20:40:09 +0000 (0:00:02.351) 0:02:16.643 ******* 2026-02-23 20:40:22.104783 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:22.104787 | orchestrator | 2026-02-23 20:40:22.104791 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-23 20:40:22.104795 | orchestrator | Monday 23 February 2026 20:40:12 +0000 (0:00:02.298) 0:02:18.941 ******* 2026-02-23 20:40:22.104802 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:22.104805 | orchestrator | 2026-02-23 20:40:22.104809 | orchestrator | TASK [opensearch : 
Create new log retention policy] **************************** 2026-02-23 20:40:22.104813 | orchestrator | Monday 23 February 2026 20:40:14 +0000 (0:00:02.195) 0:02:21.136 ******* 2026-02-23 20:40:22.104817 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:22.104820 | orchestrator | 2026-02-23 20:40:22.104824 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-23 20:40:22.104828 | orchestrator | Monday 23 February 2026 20:40:16 +0000 (0:00:02.417) 0:02:23.554 ******* 2026-02-23 20:40:22.104831 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:22.104835 | orchestrator | 2026-02-23 20:40:22.104839 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:40:22.104844 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-23 20:40:22.104850 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-23 20:40:22.104857 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-23 20:40:22.104861 | orchestrator | 2026-02-23 20:40:22.104864 | orchestrator | 2026-02-23 20:40:22.104868 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:40:22.104872 | orchestrator | Monday 23 February 2026 20:40:19 +0000 (0:00:02.553) 0:02:26.108 ******* 2026-02-23 20:40:22.104876 | orchestrator | =============================================================================== 2026-02-23 20:40:22.104880 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 59.63s 2026-02-23 20:40:22.104883 | orchestrator | opensearch : Restart opensearch container ------------------------------ 53.64s 2026-02-23 20:40:22.104887 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.92s 
2026-02-23 20:40:22.104891 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.55s 2026-02-23 20:40:22.104895 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.49s 2026-02-23 20:40:22.104898 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.44s 2026-02-23 20:40:22.104902 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.42s 2026-02-23 20:40:22.104906 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.35s 2026-02-23 20:40:22.104910 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.30s 2026-02-23 20:40:22.104913 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.20s 2026-02-23 20:40:22.104917 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.14s 2026-02-23 20:40:22.104921 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.00s 2026-02-23 20:40:22.104925 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.74s 2026-02-23 20:40:22.104929 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.64s 2026-02-23 20:40:22.104932 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.04s 2026-02-23 20:40:22.104936 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.74s 2026-02-23 20:40:22.104944 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2026-02-23 20:40:22.104947 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.63s 2026-02-23 20:40:22.104951 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 
2026-02-23 20:40:22.104955 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s
2026-02-23 20:40:22.104959 | orchestrator | 2026-02-23 20:40:22 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED
2026-02-23 20:40:22.106749 | orchestrator | 2026-02-23 20:40:22 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED
2026-02-23 20:40:22.106795 | orchestrator | 2026-02-23 20:40:22 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:40:25.159320 | orchestrator | 2026-02-23 20:40:25 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED
2026-02-23 20:40:25.162780 | orchestrator | 2026-02-23 20:40:25 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED
2026-02-23 20:40:25.162828 | orchestrator | 2026-02-23 20:40:25 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:40:28.216589 | orchestrator | 2026-02-23 20:40:28 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED
2026-02-23 20:40:28.217954 | orchestrator | 2026-02-23 20:40:28 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED
2026-02-23 20:40:28.217998 | orchestrator | 2026-02-23 20:40:28 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:40:31.271988 | orchestrator | 2026-02-23 20:40:31 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED
2026-02-23 20:40:31.275176 | orchestrator | 2026-02-23 20:40:31 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED
2026-02-23 20:40:31.275347 | orchestrator | 2026-02-23 20:40:31 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:40:34.327298 | orchestrator | 2026-02-23 20:40:34 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED
2026-02-23 20:40:34.329116 | orchestrator | 2026-02-23 20:40:34 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED
2026-02-23 20:40:34.329164 | orchestrator | 2026-02-23 20:40:34 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:40:37.374645 | orchestrator | 2026-02-23 20:40:37 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state STARTED
2026-02-23 20:40:37.375571 | orchestrator | 2026-02-23 20:40:37 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED
2026-02-23 20:40:37.375614 | orchestrator | 2026-02-23 20:40:37 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:40:40.426503 | orchestrator | 2026-02-23 20:40:40 | INFO  | Task 4be2e505-3623-43d4-9526-b5c161fb9659 is in state SUCCESS
2026-02-23 20:40:40.428340 | orchestrator |
2026-02-23 20:40:40.428415 | orchestrator |
2026-02-23 20:40:40.428425 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-02-23 20:40:40.428434 | orchestrator |
2026-02-23 20:40:40.428440 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-02-23 20:40:40.428447 | orchestrator | Monday 23 February 2026 20:37:53 +0000 (0:00:00.135) 0:00:00.135 *******
2026-02-23 20:40:40.428464 | orchestrator | ok: [localhost] => {
2026-02-23 20:40:40.428472 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-02-23 20:40:40.428634 | orchestrator | }
2026-02-23 20:40:40.428650 | orchestrator |
2026-02-23 20:40:40.428658 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-02-23 20:40:40.428665 | orchestrator | Monday 23 February 2026 20:37:53 +0000 (0:00:00.059) 0:00:00.195 *******
2026-02-23 20:40:40.428672 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-02-23 20:40:40.428680 | orchestrator | ...ignoring 2026-02-23 20:40:40.428687 | orchestrator | 2026-02-23 20:40:40.428693 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-02-23 20:40:40.428700 | orchestrator | Monday 23 February 2026 20:37:56 +0000 (0:00:02.818) 0:00:03.013 ******* 2026-02-23 20:40:40.428805 | orchestrator | skipping: [localhost] 2026-02-23 20:40:40.428814 | orchestrator | 2026-02-23 20:40:40.428820 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-02-23 20:40:40.428827 | orchestrator | Monday 23 February 2026 20:37:56 +0000 (0:00:00.094) 0:00:03.107 ******* 2026-02-23 20:40:40.428833 | orchestrator | ok: [localhost] 2026-02-23 20:40:40.428840 | orchestrator | 2026-02-23 20:40:40.428846 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:40:40.428851 | orchestrator | 2026-02-23 20:40:40.428857 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:40:40.428863 | orchestrator | Monday 23 February 2026 20:37:56 +0000 (0:00:00.146) 0:00:03.253 ******* 2026-02-23 20:40:40.428869 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:40.428879 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:40.428885 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:40.428891 | orchestrator | 2026-02-23 20:40:40.428898 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:40:40.428905 | orchestrator | Monday 23 February 2026 20:37:57 +0000 (0:00:00.386) 0:00:03.640 ******* 2026-02-23 20:40:40.428912 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-23 20:40:40.428920 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
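The play above probes 192.168.16.9:3306 for the string "MariaDB" (the MySQL-protocol greeting contains the server version), treats a timeout as "not yet deployed", and only then falls back to kolla_action_ng. A minimal Ansible sketch of that detection pattern, assuming `wait_for` arguments and variable names that are illustrative and not taken from the OSISM source (only the task names and the observed 2-second timeout come from the log):

```yaml
# Hypothetical reconstruction of the "Check MariaDB service" pattern seen in
# the log; module arguments and variable names are assumptions.
- name: Check MariaDB service
  ansible.builtin.wait_for:
    host: 192.168.16.9        # internal VIP from the log's error message
    port: 3306
    search_regex: MariaDB     # server greeting on a healthy Galera node
    timeout: 2                # matches "elapsed": 2 in the failure output
  register: mariadb_check
  ignore_errors: true         # a fresh deployment is expected to time out here

- name: Set kolla_action_mariadb = upgrade if MariaDB is already running
  ansible.builtin.set_fact:
    kolla_action_mariadb: upgrade
  when: mariadb_check is not failed

- name: Set kolla_action_mariadb = kolla_action_ng
  ansible.builtin.set_fact:
    kolla_action_mariadb: "{{ kolla_action_ng }}"
  when: mariadb_check is failed
```

In this run the probe timed out (the service was not yet deployed), so the upgrade branch was skipped and the deploy action was kept, exactly as the `skipping:`/`ok:` pair in the log shows.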
2026-02-23 20:40:40.428927 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-23 20:40:40.428934 | orchestrator | 2026-02-23 20:40:40.428942 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-23 20:40:40.428949 | orchestrator | 2026-02-23 20:40:40.428956 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-23 20:40:40.428964 | orchestrator | Monday 23 February 2026 20:37:57 +0000 (0:00:00.488) 0:00:04.128 ******* 2026-02-23 20:40:40.428972 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-23 20:40:40.428979 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-23 20:40:40.428986 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-23 20:40:40.428992 | orchestrator | 2026-02-23 20:40:40.429000 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-23 20:40:40.429007 | orchestrator | Monday 23 February 2026 20:37:57 +0000 (0:00:00.357) 0:00:04.486 ******* 2026-02-23 20:40:40.429119 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:40.429210 | orchestrator | 2026-02-23 20:40:40.429217 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-23 20:40:40.429224 | orchestrator | Monday 23 February 2026 20:37:58 +0000 (0:00:00.463) 0:00:04.949 ******* 2026-02-23 20:40:40.429253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-23 20:40:40.429279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-23 20:40:40.429287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-23 20:40:40.429302 | orchestrator | 2026-02-23 20:40:40.429315 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-23 20:40:40.429322 | orchestrator | Monday 23 February 2026 20:38:01 +0000 (0:00:03.258) 0:00:08.208 ******* 2026-02-23 20:40:40.429327 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.429334 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.429340 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.429347 | orchestrator | 2026-02-23 20:40:40.429353 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-23 20:40:40.429359 | orchestrator | Monday 23 February 2026 20:38:02 +0000 (0:00:00.753) 0:00:08.962 ******* 2026-02-23 20:40:40.429366 | orchestrator | skipping: [testbed-node-2] 2026-02-23 
20:40:40.429373 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.429380 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.429386 | orchestrator | 2026-02-23 20:40:40.429392 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-23 20:40:40.429398 | orchestrator | Monday 23 February 2026 20:38:03 +0000 (0:00:01.455) 0:00:10.417 ******* 2026-02-23 20:40:40.429408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-23 20:40:40.429419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-23 20:40:40.429435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-23 
20:40:40.429443 | orchestrator | 2026-02-23 20:40:40.429449 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-23 20:40:40.429456 | orchestrator | Monday 23 February 2026 20:38:07 +0000 (0:00:03.216) 0:00:13.634 ******* 2026-02-23 20:40:40.429463 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.429469 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.429476 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.429482 | orchestrator | 2026-02-23 20:40:40.429488 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-23 20:40:40.429494 | orchestrator | Monday 23 February 2026 20:38:08 +0000 (0:00:01.166) 0:00:14.800 ******* 2026-02-23 20:40:40.429499 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:40.429505 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.429511 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:40.429517 | orchestrator | 2026-02-23 20:40:40.429523 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-23 20:40:40.429530 | orchestrator | Monday 23 February 2026 20:38:12 +0000 (0:00:04.111) 0:00:18.911 ******* 2026-02-23 20:40:40.429537 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:40.429544 | orchestrator | 2026-02-23 20:40:40.429550 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-23 20:40:40.429561 | orchestrator | Monday 23 February 2026 20:38:12 +0000 (0:00:00.468) 0:00:19.379 ******* 2026-02-23 20:40:40.429573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:40:40.429581 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.429591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:40:40.429598 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:40.429610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:40:40.429633 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.429639 | orchestrator | 2026-02-23 20:40:40.429645 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-23 20:40:40.429651 | orchestrator | Monday 23 February 2026 20:38:16 +0000 (0:00:03.332) 0:00:22.711 ******* 2026-02-23 20:40:40.429660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:40:40.429668 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:40.429679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:40:40.429691 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.429702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:40:40.429709 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.429716 | orchestrator | 2026-02-23 20:40:40.429723 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-23 20:40:40.429730 | orchestrator | Monday 23 February 2026 20:38:18 +0000 (0:00:02.698) 0:00:25.410 ******* 2026-02-23 20:40:40.429736 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:40:40.429747 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.429764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:40:40.429771 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.429777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-23 20:40:40.429788 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:40.429794 | orchestrator | 2026-02-23 20:40:40.429800 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-23 20:40:40.429805 | orchestrator | Monday 23 February 2026 20:38:21 +0000 
(0:00:02.483) 0:00:27.894 ******* 2026-02-23 20:40:40.429817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-23 20:40:40.429828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-23 20:40:40.429845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-23 20:40:40.429852 | orchestrator | 2026-02-23 20:40:40.429858 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-23 20:40:40.429863 | orchestrator | Monday 23 February 2026 20:38:24 +0000 (0:00:03.029) 0:00:30.923 ******* 2026-02-23 20:40:40.429869 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.429875 | orchestrator | 
changed: [testbed-node-1] 2026-02-23 20:40:40.429880 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:40.429886 | orchestrator | 2026-02-23 20:40:40.429891 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-23 20:40:40.429897 | orchestrator | Monday 23 February 2026 20:38:25 +0000 (0:00:00.803) 0:00:31.727 ******* 2026-02-23 20:40:40.429903 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:40.429910 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:40.429917 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:40.429924 | orchestrator | 2026-02-23 20:40:40.429934 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-23 20:40:40.429945 | orchestrator | Monday 23 February 2026 20:38:25 +0000 (0:00:00.563) 0:00:32.291 ******* 2026-02-23 20:40:40.429951 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:40.429957 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:40.429963 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:40.429968 | orchestrator | 2026-02-23 20:40:40.429974 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-23 20:40:40.429981 | orchestrator | Monday 23 February 2026 20:38:26 +0000 (0:00:00.339) 0:00:32.631 ******* 2026-02-23 20:40:40.429988 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-23 20:40:40.429994 | orchestrator | ...ignoring 2026-02-23 20:40:40.430001 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-23 20:40:40.430007 | orchestrator | ...ignoring 2026-02-23 20:40:40.430164 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-23 20:40:40.430176 | orchestrator | ...ignoring 2026-02-23 20:40:40.430183 | orchestrator | 2026-02-23 20:40:40.430189 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-23 20:40:40.430196 | orchestrator | Monday 23 February 2026 20:38:37 +0000 (0:00:10.985) 0:00:43.616 ******* 2026-02-23 20:40:40.430201 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:40.430207 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:40.430214 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:40.430220 | orchestrator | 2026-02-23 20:40:40.430226 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-23 20:40:40.430233 | orchestrator | Monday 23 February 2026 20:38:37 +0000 (0:00:00.392) 0:00:44.008 ******* 2026-02-23 20:40:40.430239 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:40.430245 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.430251 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.430257 | orchestrator | 2026-02-23 20:40:40.430263 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-23 20:40:40.430270 | orchestrator | Monday 23 February 2026 20:38:37 +0000 (0:00:00.525) 0:00:44.533 ******* 2026-02-23 20:40:40.430276 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:40.430282 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.430288 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.430295 | orchestrator | 2026-02-23 20:40:40.430301 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-23 20:40:40.430307 | orchestrator | Monday 23 February 2026 20:38:38 +0000 (0:00:00.367) 0:00:44.901 ******* 2026-02-23 20:40:40.430314 | orchestrator | skipping: 
[testbed-node-0] 2026-02-23 20:40:40.430320 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.430326 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.430332 | orchestrator | 2026-02-23 20:40:40.430338 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-23 20:40:40.430344 | orchestrator | Monday 23 February 2026 20:38:38 +0000 (0:00:00.386) 0:00:45.287 ******* 2026-02-23 20:40:40.430351 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:40.430358 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:40.430365 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:40.430373 | orchestrator | 2026-02-23 20:40:40.430380 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-23 20:40:40.430388 | orchestrator | Monday 23 February 2026 20:38:39 +0000 (0:00:00.366) 0:00:45.654 ******* 2026-02-23 20:40:40.430405 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:40.430411 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.430418 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.430424 | orchestrator | 2026-02-23 20:40:40.430430 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-23 20:40:40.430447 | orchestrator | Monday 23 February 2026 20:38:39 +0000 (0:00:00.523) 0:00:46.177 ******* 2026-02-23 20:40:40.430453 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.430460 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.430468 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-23 20:40:40.430475 | orchestrator | 2026-02-23 20:40:40.430482 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-23 20:40:40.430488 | orchestrator | Monday 23 February 2026 20:38:39 +0000 (0:00:00.323) 0:00:46.501 ******* 2026-02-23 
20:40:40.430494 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.430499 | orchestrator | 2026-02-23 20:40:40.430505 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-23 20:40:40.430511 | orchestrator | Monday 23 February 2026 20:38:49 +0000 (0:00:09.807) 0:00:56.308 ******* 2026-02-23 20:40:40.430517 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:40.430523 | orchestrator | 2026-02-23 20:40:40.430529 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-23 20:40:40.430534 | orchestrator | Monday 23 February 2026 20:38:49 +0000 (0:00:00.116) 0:00:56.425 ******* 2026-02-23 20:40:40.430540 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:40.430546 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.430552 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.430558 | orchestrator | 2026-02-23 20:40:40.430564 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-23 20:40:40.430570 | orchestrator | Monday 23 February 2026 20:38:50 +0000 (0:00:00.892) 0:00:57.318 ******* 2026-02-23 20:40:40.430576 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.430581 | orchestrator | 2026-02-23 20:40:40.430588 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-23 20:40:40.430594 | orchestrator | Monday 23 February 2026 20:38:57 +0000 (0:00:07.071) 0:01:04.390 ******* 2026-02-23 20:40:40.430601 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:40.430607 | orchestrator | 2026-02-23 20:40:40.430613 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-02-23 20:40:40.430626 | orchestrator | Monday 23 February 2026 20:38:59 +0000 (0:00:01.526) 0:01:05.917 ******* 2026-02-23 20:40:40.430632 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:40.430637 | 
orchestrator | 2026-02-23 20:40:40.430644 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-23 20:40:40.430650 | orchestrator | Monday 23 February 2026 20:39:01 +0000 (0:00:02.186) 0:01:08.103 ******* 2026-02-23 20:40:40.430655 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.430661 | orchestrator | 2026-02-23 20:40:40.430668 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-23 20:40:40.430675 | orchestrator | Monday 23 February 2026 20:39:01 +0000 (0:00:00.119) 0:01:08.223 ******* 2026-02-23 20:40:40.430682 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:40.430688 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.430695 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.430701 | orchestrator | 2026-02-23 20:40:40.430708 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-23 20:40:40.430714 | orchestrator | Monday 23 February 2026 20:39:01 +0000 (0:00:00.306) 0:01:08.529 ******* 2026-02-23 20:40:40.430720 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:40:40.430726 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:40.430731 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:40.430737 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-23 20:40:40.430743 | orchestrator | 2026-02-23 20:40:40.430749 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-23 20:40:40.430755 | orchestrator | skipping: no hosts matched 2026-02-23 20:40:40.430762 | orchestrator | 2026-02-23 20:40:40.430768 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-23 20:40:40.430784 | orchestrator | 2026-02-23 20:40:40.430790 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-23 20:40:40.430796 | orchestrator | Monday 23 February 2026 20:39:02 +0000 (0:00:00.437) 0:01:08.967 ******* 2026-02-23 20:40:40.430802 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:40:40.430809 | orchestrator | 2026-02-23 20:40:40.430816 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-23 20:40:40.430824 | orchestrator | Monday 23 February 2026 20:39:17 +0000 (0:00:15.044) 0:01:24.012 ******* 2026-02-23 20:40:40.430832 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:40.430838 | orchestrator | 2026-02-23 20:40:40.430846 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-23 20:40:40.430853 | orchestrator | Monday 23 February 2026 20:39:32 +0000 (0:00:15.534) 0:01:39.546 ******* 2026-02-23 20:40:40.430860 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:40:40.430866 | orchestrator | 2026-02-23 20:40:40.430873 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-23 20:40:40.430880 | orchestrator | 2026-02-23 20:40:40.430887 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-23 20:40:40.430894 | orchestrator | Monday 23 February 2026 20:39:35 +0000 (0:00:02.155) 0:01:41.701 ******* 2026-02-23 20:40:40.430899 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:40:40.430906 | orchestrator | 2026-02-23 20:40:40.430912 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-23 20:40:40.430918 | orchestrator | Monday 23 February 2026 20:39:50 +0000 (0:00:15.588) 0:01:57.290 ******* 2026-02-23 20:40:40.430924 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:40.430930 | orchestrator | 2026-02-23 20:40:40.430936 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-23 20:40:40.430942 
| orchestrator | Monday 23 February 2026 20:40:05 +0000 (0:00:14.519) 0:02:11.810 ******* 2026-02-23 20:40:40.430947 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:40:40.430953 | orchestrator | 2026-02-23 20:40:40.430959 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-23 20:40:40.430965 | orchestrator | 2026-02-23 20:40:40.430981 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-23 20:40:40.430989 | orchestrator | Monday 23 February 2026 20:40:07 +0000 (0:00:02.449) 0:02:14.259 ******* 2026-02-23 20:40:40.430995 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.431000 | orchestrator | 2026-02-23 20:40:40.431006 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-23 20:40:40.431012 | orchestrator | Monday 23 February 2026 20:40:23 +0000 (0:00:15.875) 0:02:30.135 ******* 2026-02-23 20:40:40.431018 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:40.431025 | orchestrator | 2026-02-23 20:40:40.431031 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-23 20:40:40.431037 | orchestrator | Monday 23 February 2026 20:40:24 +0000 (0:00:00.604) 0:02:30.740 ******* 2026-02-23 20:40:40.431043 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:40:40.431049 | orchestrator | 2026-02-23 20:40:40.431054 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-23 20:40:40.431119 | orchestrator | 2026-02-23 20:40:40.431128 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-23 20:40:40.431135 | orchestrator | Monday 23 February 2026 20:40:26 +0000 (0:00:02.690) 0:02:33.430 ******* 2026-02-23 20:40:40.431142 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:40:40.431148 | orchestrator | 
2026-02-23 20:40:40.431154 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-23 20:40:40.431160 | orchestrator | Monday 23 February 2026 20:40:27 +0000 (0:00:00.522) 0:02:33.953 ******* 2026-02-23 20:40:40.431167 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.431174 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.431180 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.431187 | orchestrator | 2026-02-23 20:40:40.431202 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-23 20:40:40.431209 | orchestrator | Monday 23 February 2026 20:40:29 +0000 (0:00:02.314) 0:02:36.267 ******* 2026-02-23 20:40:40.431215 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.431222 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.431229 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.431234 | orchestrator | 2026-02-23 20:40:40.431241 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-23 20:40:40.431247 | orchestrator | Monday 23 February 2026 20:40:31 +0000 (0:00:02.275) 0:02:38.543 ******* 2026-02-23 20:40:40.431260 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.431267 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.431273 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.431279 | orchestrator | 2026-02-23 20:40:40.431285 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-23 20:40:40.431292 | orchestrator | Monday 23 February 2026 20:40:33 +0000 (0:00:02.019) 0:02:40.563 ******* 2026-02-23 20:40:40.431298 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:40:40.431305 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:40:40.431311 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:40:40.431317 | orchestrator | 
2026-02-23 20:40:40.431323 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-23 20:40:40.431329 | orchestrator | Monday 23 February 2026 20:40:36 +0000 (0:00:02.401) 0:02:42.964 *******
2026-02-23 20:40:40.431335 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:40:40.431341 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:40:40.431347 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:40:40.431353 | orchestrator |
2026-02-23 20:40:40.431359 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-23 20:40:40.431365 | orchestrator | Monday 23 February 2026 20:40:39 +0000 (0:00:03.054) 0:02:46.018 *******
2026-02-23 20:40:40.431371 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:40:40.431377 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:40:40.431382 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:40:40.431388 | orchestrator |
2026-02-23 20:40:40.431394 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:40:40.431401 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-02-23 20:40:40.431409 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-02-23 20:40:40.431416 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-23 20:40:40.431423 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-23 20:40:40.431430 | orchestrator |
2026-02-23 20:40:40.431437 | orchestrator |
2026-02-23 20:40:40.431443 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:40:40.431449 | orchestrator | Monday 23 February 2026 20:40:39 +0000 (0:00:00.218) 0:02:46.237 *******
2026-02-23 20:40:40.431454 | orchestrator | ===============================================================================
2026-02-23 20:40:40.431460 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 30.63s
2026-02-23 20:40:40.431466 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 30.05s
2026-02-23 20:40:40.431472 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.88s
2026-02-23 20:40:40.431478 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.99s
2026-02-23 20:40:40.431485 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.81s
2026-02-23 20:40:40.431491 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.07s
2026-02-23 20:40:40.431517 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.61s
2026-02-23 20:40:40.431525 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.11s
2026-02-23 20:40:40.431532 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.33s
2026-02-23 20:40:40.431539 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.26s
2026-02-23 20:40:40.431546 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.22s
2026-02-23 20:40:40.431553 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.05s
2026-02-23 20:40:40.431559 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.03s
2026-02-23 20:40:40.431566 | orchestrator | Check MariaDB service --------------------------------------------------- 2.82s
2026-02-23 20:40:40.431571 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.70s
2026-02-23 20:40:40.431578 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.69s
2026-02-23 20:40:40.431584 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.48s
2026-02-23 20:40:40.431590 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.40s
2026-02-23 20:40:40.431597 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.31s
2026-02-23 20:40:40.431603 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.28s
2026-02-23 20:40:40.431610 | orchestrator | 2026-02-23 20:40:40 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED
2026-02-23 20:40:40.431618 | orchestrator | 2026-02-23 20:40:40 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:40:43.497475 | orchestrator | 2026-02-23 20:40:43 | INFO  | Task 8fc5dfba-6c29-4222-bce2-b6a46b47f059 is in state STARTED
2026-02-23 20:40:43.499518 | orchestrator | 2026-02-23 20:40:43 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED
2026-02-23 20:40:43.501238 | orchestrator | 2026-02-23 20:40:43 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:40:43.501294 | orchestrator | 2026-02-23 20:40:43 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:40:46.526679 | orchestrator | 2026-02-23 20:40:46 | INFO  | Task 8fc5dfba-6c29-4222-bce2-b6a46b47f059 is in state STARTED
2026-02-23 20:40:46.529576 | orchestrator | 2026-02-23 20:40:46 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED
2026-02-23 20:40:46.531270 | orchestrator | 2026-02-23 20:40:46 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:40:46.531610 | orchestrator | 2026-02-23 20:40:46 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:40:49.560890 | orchestrator | 2026-02-23 20:40:49 | INFO  | Task 8fc5dfba-6c29-4222-bce2-b6a46b47f059 is
in state STARTED
2026-02-23 20:40:49.562936 | orchestrator | 2026-02-23 20:40:49 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state STARTED
2026-02-23 20:40:49.565415 | orchestrator | 2026-02-23 20:40:49 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:40:49.565451 | orchestrator | 2026-02-23 20:40:49 | INFO  | Wait 1 second(s) until the next check
[repetitive polling output trimmed: tasks 8fc5dfba-6c29-4222-bce2-b6a46b47f059, 13c40dec-6648-456f-b5d2-83a6dd87231e and 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 remained in state STARTED, rechecked roughly every 3 seconds until 2026-02-23 20:42:14]
2026-02-23 20:42:17.868030 | orchestrator |
2026-02-23 20:42:17.868079 | orchestrator |
2026-02-23 20:42:17.868085 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:42:17.868089 | orchestrator |
2026-02-23 20:42:17.868093 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:42:17.868098 | orchestrator | Monday 23 February 2026 20:40:44 +0000 (0:00:00.192) 0:00:00.192 *******
2026-02-23 20:42:17.868102 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:42:17.868122 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:42:17.868126 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:42:17.868130 | orchestrator |
2026-02-23 20:42:17.868134 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-23 20:42:17.868138 | orchestrator | Monday 23 February 2026 20:40:44 +0000
(0:00:00.221) 0:00:00.413 *******
2026-02-23 20:42:17.868155 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-23 20:42:17.868159 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-23 20:42:17.868163 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-23 20:42:17.868167 | orchestrator |
2026-02-23 20:42:17.868171 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-23 20:42:17.868175 | orchestrator |
2026-02-23 20:42:17.868179 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-23 20:42:17.868182 | orchestrator | Monday 23 February 2026 20:40:44 +0000 (0:00:00.301) 0:00:00.715 *******
2026-02-23 20:42:17.868187 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:42:17.868191 | orchestrator |
2026-02-23 20:42:17.868195 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-23 20:42:17.868198 | orchestrator | Monday 23 February 2026 20:40:44 +0000 (0:00:00.426) 0:00:01.141 *******
2026-02-23 20:42:17.868213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-23 20:42:17.868229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-23 20:42:17.868239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-23 20:42:17.868244 | orchestrator |
2026-02-23 20:42:17.868248 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-02-23 20:42:17.868251 | orchestrator | Monday
23 February 2026 20:40:45 +0000 (0:00:00.986) 0:00:02.128 *******
2026-02-23 20:42:17.868255 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:42:17.868259 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:42:17.868263 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:42:17.868266 | orchestrator |
2026-02-23 20:42:17.868270 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-23 20:42:17.868276 | orchestrator | Monday 23 February 2026 20:40:46 +0000 (0:00:00.371) 0:00:02.500 *******
2026-02-23 20:42:17.868280 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-23 20:42:17.868287 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-23 20:42:17.868291 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-02-23 20:42:17.868295 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-02-23 20:42:17.868301 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-23 20:42:17.868307 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-23 20:42:17.868316 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-23 20:42:17.868324 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-23 20:42:17.868432 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-23 20:42:17.868470 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-23 20:42:17.868475 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-23 20:42:17.868578 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-23 20:42:17.868612 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-23 20:42:17.868617 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-23 20:42:17.868621 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-23 20:42:17.868624 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-23 20:42:17.868628 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-23 20:42:17.868632 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-23 20:42:17.868636 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-23 20:42:17.868639 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-23 20:42:17.868643 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-23 20:42:17.868647 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-23 20:42:17.868650 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-23 20:42:17.868654 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-23 20:42:17.868658 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-23 20:42:17.868663 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-23 20:42:17.868667 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-23 20:42:17.868675 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-23 20:42:17.868679 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-23 20:42:17.868682 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-23 20:42:17.868686 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-23 20:42:17.868695 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-23 20:42:17.868729 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-23 20:42:17.868735 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-23 20:42:17.868739 | orchestrator |
2026-02-23 20:42:17.868743 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-23 20:42:17.868747 | orchestrator | Monday 23 February 2026 20:40:47 +0000 (0:00:00.723) 0:00:03.223 *******
2026-02-23 20:42:17.868750 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:42:17.868754 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:42:17.868758 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:42:17.868762 | orchestrator |
2026-02-23 20:42:17.868766 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-23 20:42:17.868772 | orchestrator | Monday 23 February 2026 20:40:47 +0000 (0:00:00.273) 0:00:03.497 *******
2026-02-23 20:42:17.868779 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:42:17.868785 | orchestrator |
2026-02-23 20:42:17.868797 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-23 20:42:17.868803 | orchestrator | Monday 23 February 2026 20:40:47 +0000 (0:00:00.123) 0:00:03.620 *******
2026-02-23 20:42:17.868809 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:42:17.868815 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:42:17.868821 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:42:17.868828 | orchestrator |
2026-02-23 20:42:17.868834 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-23 20:42:17.868839 | orchestrator | Monday 23 February 2026 20:40:47 +0000 (0:00:00.381) 0:00:04.002 *******
2026-02-23 20:42:17.868843 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:42:17.868847 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:42:17.868851 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:42:17.868854 | orchestrator |
2026-02-23 20:42:17.868858 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-23 20:42:17.868862 | orchestrator | Monday 23 February 2026 20:40:48 +0000 (0:00:00.300) 0:00:04.302 *******
2026-02-23 20:42:17.868872 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:42:17.868894 | orchestrator |
2026-02-23 20:42:17.868899 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-23 20:42:17.868927 | orchestrator | Monday 23 February 2026 20:40:48 +0000 (0:00:00.126) 0:00:04.428 *******
2026-02-23 20:42:17.868932 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:42:17.868936 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:42:17.868940 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:42:17.868943 | orchestrator |
2026-02-23 20:42:17.868947 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-23 20:42:17.868951 | orchestrator | Monday 23 February 2026 20:40:48 +0000 (0:00:00.261) 0:00:04.690 *******
2026-02-23 20:42:17.868955 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:42:17.868959 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:42:17.868972 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:42:17.868976 | orchestrator |
2026-02-23 20:42:17.868980 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-23 20:42:17.868984 | orchestrator | Monday 23 February 2026 20:40:48 +0000 (0:00:00.267) 0:00:04.958 *******
2026-02-23 20:42:17.868988 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:42:17.868992 | orchestrator |
2026-02-23 20:42:17.868995 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-23 20:42:17.868999 | orchestrator | Monday 23 February 2026 20:40:49 +0000 (0:00:00.227) 0:00:05.185 *******
2026-02-23 20:42:17.869037 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:42:17.869054 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:42:17.869350 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:42:17.869367 | orchestrator |
2026-02-23 20:42:17.869371 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-23 20:42:17.869376 | orchestrator | Monday 23 February 2026 20:40:49 +0000 (0:00:00.257) 0:00:05.443 *******
2026-02-23 20:42:17.869380 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:42:17.869385 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:42:17.869389 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:42:17.869393 | orchestrator |
2026-02-23 20:42:17.869397 | orchestrator | TASK
[horizon : Check if policies shall be overwritten] ************************ 2026-02-23 20:42:17.869402 | orchestrator | Monday 23 February 2026 20:40:49 +0000 (0:00:00.303) 0:00:05.746 ******* 2026-02-23 20:42:17.869411 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.869447 | orchestrator | 2026-02-23 20:42:17.869452 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-23 20:42:17.869457 | orchestrator | Monday 23 February 2026 20:40:49 +0000 (0:00:00.116) 0:00:05.863 ******* 2026-02-23 20:42:17.869461 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.869465 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.869470 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.869474 | orchestrator | 2026-02-23 20:42:17.869492 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-23 20:42:17.869500 | orchestrator | Monday 23 February 2026 20:40:49 +0000 (0:00:00.295) 0:00:06.158 ******* 2026-02-23 20:42:17.869504 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:42:17.869507 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:42:17.869511 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:42:17.869515 | orchestrator | 2026-02-23 20:42:17.869519 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-23 20:42:17.869523 | orchestrator | Monday 23 February 2026 20:40:50 +0000 (0:00:00.647) 0:00:06.806 ******* 2026-02-23 20:42:17.869526 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.869530 | orchestrator | 2026-02-23 20:42:17.869534 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-23 20:42:17.869537 | orchestrator | Monday 23 February 2026 20:40:50 +0000 (0:00:00.127) 0:00:06.934 ******* 2026-02-23 20:42:17.869541 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.869545 | orchestrator 
| skipping: [testbed-node-1] 2026-02-23 20:42:17.869549 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.869552 | orchestrator | 2026-02-23 20:42:17.869556 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-23 20:42:17.869560 | orchestrator | Monday 23 February 2026 20:40:51 +0000 (0:00:00.339) 0:00:07.273 ******* 2026-02-23 20:42:17.869564 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:42:17.869567 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:42:17.869571 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:42:17.869575 | orchestrator | 2026-02-23 20:42:17.869579 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-23 20:42:17.869582 | orchestrator | Monday 23 February 2026 20:40:51 +0000 (0:00:00.339) 0:00:07.613 ******* 2026-02-23 20:42:17.869586 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.869590 | orchestrator | 2026-02-23 20:42:17.869594 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-23 20:42:17.869597 | orchestrator | Monday 23 February 2026 20:40:51 +0000 (0:00:00.132) 0:00:07.745 ******* 2026-02-23 20:42:17.869601 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.869822 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.869828 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.869911 | orchestrator | 2026-02-23 20:42:17.869915 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-23 20:42:17.869958 | orchestrator | Monday 23 February 2026 20:40:51 +0000 (0:00:00.300) 0:00:08.046 ******* 2026-02-23 20:42:17.869964 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:42:17.869974 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:42:17.869978 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:42:17.869982 | orchestrator | 2026-02-23 20:42:17.869986 | 
orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-23 20:42:17.869989 | orchestrator | Monday 23 February 2026 20:40:52 +0000 (0:00:00.550) 0:00:08.597 ******* 2026-02-23 20:42:17.869993 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.869997 | orchestrator | 2026-02-23 20:42:17.870001 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-23 20:42:17.870004 | orchestrator | Monday 23 February 2026 20:40:52 +0000 (0:00:00.132) 0:00:08.729 ******* 2026-02-23 20:42:17.870008 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870053 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.870059 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.870063 | orchestrator | 2026-02-23 20:42:17.870067 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-23 20:42:17.870070 | orchestrator | Monday 23 February 2026 20:40:52 +0000 (0:00:00.292) 0:00:09.021 ******* 2026-02-23 20:42:17.870074 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:42:17.870078 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:42:17.870082 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:42:17.870085 | orchestrator | 2026-02-23 20:42:17.870089 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-23 20:42:17.870093 | orchestrator | Monday 23 February 2026 20:40:53 +0000 (0:00:00.320) 0:00:09.341 ******* 2026-02-23 20:42:17.870097 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870100 | orchestrator | 2026-02-23 20:42:17.870104 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-23 20:42:17.870108 | orchestrator | Monday 23 February 2026 20:40:53 +0000 (0:00:00.142) 0:00:09.483 ******* 2026-02-23 20:42:17.870112 | orchestrator | skipping: [testbed-node-0] 2026-02-23 
20:42:17.870116 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.870119 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.870123 | orchestrator | 2026-02-23 20:42:17.870127 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-23 20:42:17.870131 | orchestrator | Monday 23 February 2026 20:40:53 +0000 (0:00:00.454) 0:00:09.938 ******* 2026-02-23 20:42:17.870135 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:42:17.870138 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:42:17.870142 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:42:17.870146 | orchestrator | 2026-02-23 20:42:17.870149 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-23 20:42:17.870153 | orchestrator | Monday 23 February 2026 20:40:54 +0000 (0:00:00.301) 0:00:10.239 ******* 2026-02-23 20:42:17.870157 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870161 | orchestrator | 2026-02-23 20:42:17.870164 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-23 20:42:17.870168 | orchestrator | Monday 23 February 2026 20:40:54 +0000 (0:00:00.145) 0:00:10.385 ******* 2026-02-23 20:42:17.870173 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870180 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.870186 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.870192 | orchestrator | 2026-02-23 20:42:17.870198 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-23 20:42:17.870205 | orchestrator | Monday 23 February 2026 20:40:54 +0000 (0:00:00.290) 0:00:10.676 ******* 2026-02-23 20:42:17.870211 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:42:17.870226 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:42:17.870232 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:42:17.870238 | orchestrator | 
2026-02-23 20:42:17.870245 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-23 20:42:17.870251 | orchestrator | Monday 23 February 2026 20:40:54 +0000 (0:00:00.338) 0:00:11.014 ******* 2026-02-23 20:42:17.870257 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870262 | orchestrator | 2026-02-23 20:42:17.870275 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-23 20:42:17.870290 | orchestrator | Monday 23 February 2026 20:40:54 +0000 (0:00:00.122) 0:00:11.137 ******* 2026-02-23 20:42:17.870295 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870299 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.870302 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.870306 | orchestrator | 2026-02-23 20:42:17.870310 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-02-23 20:42:17.870314 | orchestrator | Monday 23 February 2026 20:40:55 +0000 (0:00:00.549) 0:00:11.687 ******* 2026-02-23 20:42:17.870317 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:42:17.870321 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:42:17.870325 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:42:17.870329 | orchestrator | 2026-02-23 20:42:17.870333 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-23 20:42:17.870336 | orchestrator | Monday 23 February 2026 20:40:57 +0000 (0:00:01.700) 0:00:13.387 ******* 2026-02-23 20:42:17.870340 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-23 20:42:17.870344 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-23 20:42:17.870348 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-23 
20:42:17.870352 | orchestrator | 2026-02-23 20:42:17.870356 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-23 20:42:17.870360 | orchestrator | Monday 23 February 2026 20:40:59 +0000 (0:00:01.913) 0:00:15.301 ******* 2026-02-23 20:42:17.870363 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-23 20:42:17.870368 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-23 20:42:17.870372 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-23 20:42:17.870375 | orchestrator | 2026-02-23 20:42:17.870379 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-23 20:42:17.870404 | orchestrator | Monday 23 February 2026 20:41:01 +0000 (0:00:02.553) 0:00:17.854 ******* 2026-02-23 20:42:17.870409 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-23 20:42:17.870413 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-23 20:42:17.870416 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-23 20:42:17.870420 | orchestrator | 2026-02-23 20:42:17.870424 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-23 20:42:17.870428 | orchestrator | Monday 23 February 2026 20:41:03 +0000 (0:00:02.214) 0:00:20.068 ******* 2026-02-23 20:42:17.870431 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870435 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.870439 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.870443 | orchestrator | 2026-02-23 20:42:17.870446 | orchestrator | TASK 
[horizon : Copying over custom themes] ************************************ 2026-02-23 20:42:17.870450 | orchestrator | Monday 23 February 2026 20:41:04 +0000 (0:00:00.326) 0:00:20.394 ******* 2026-02-23 20:42:17.870454 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870458 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.870461 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.870465 | orchestrator | 2026-02-23 20:42:17.870469 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-23 20:42:17.870472 | orchestrator | Monday 23 February 2026 20:41:04 +0000 (0:00:00.279) 0:00:20.674 ******* 2026-02-23 20:42:17.870476 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:42:17.870484 | orchestrator | 2026-02-23 20:42:17.870487 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-23 20:42:17.870491 | orchestrator | Monday 23 February 2026 20:41:05 +0000 (0:00:00.810) 0:00:21.484 ******* 2026-02-23 20:42:17.870499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-23 20:42:17.870517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-23 20:42:17.870527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-23 20:42:17.870531 | orchestrator | 2026-02-23 20:42:17.870535 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-23 20:42:17.870539 | orchestrator | Monday 23 February 2026 20:41:06 +0000 
(0:00:01.429) 0:00:22.913 ******* 2026-02-23 20:42:17.870554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-23 20:42:17.870562 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-23 20:42:17 | INFO  | Task 8fc5dfba-6c29-4222-bce2-b6a46b47f059 is in state SUCCESS 2026-02-23 20:42:17.870583 | orchestrator | 2026-02-23 20:42:17 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED 2026-02-23 20:42:17.870587 | orchestrator | 2026-02-23 20:42:17.870591 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.870596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-23 20:42:17.870604 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.870608 | orchestrator | 2026-02-23 20:42:17.870615 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-23 20:42:17.870619 | orchestrator | Monday 23 February 2026 20:41:07 +0000 (0:00:00.659) 0:00:23.572 ******* 2026-02-23 20:42:17.870635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-23 20:42:17.870643 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-23 20:42:17.870659 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.870682 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-23 20:42:17.870703 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.870741 | orchestrator | 2026-02-23 20:42:17.870748 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-23 20:42:17.870754 | orchestrator | Monday 23 February 2026 20:41:08 +0000 (0:00:00.814) 0:00:24.387 ******* 2026-02-23 20:42:17.870766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-23 20:42:17.870794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-23 20:42:17.870809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'yes', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-23 20:42:17.870813 | orchestrator | 2026-02-23 20:42:17.870817 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-23 20:42:17.870821 | orchestrator | Monday 23 February 2026 20:41:09 +0000 (0:00:01.651) 0:00:26.038 ******* 2026-02-23 20:42:17.870825 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:42:17.870828 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:42:17.870832 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:42:17.870836 | orchestrator | 2026-02-23 20:42:17.870840 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-23 20:42:17.870858 | orchestrator | Monday 23 February 2026 20:41:10 +0000 (0:00:00.357) 0:00:26.396 ******* 2026-02-23 20:42:17.870863 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-23 20:42:17.870871 | orchestrator | 2026-02-23 20:42:17.870875 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-23 20:42:17.870879 | orchestrator | Monday 23 February 2026 20:41:10 +0000 (0:00:00.514) 0:00:26.910 ******* 2026-02-23 20:42:17.870882 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:42:17.870886 | orchestrator | 2026-02-23 20:42:17.870890 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-23 20:42:17.870894 | orchestrator | Monday 23 February 2026 20:41:13 +0000 (0:00:02.676) 0:00:29.587 ******* 2026-02-23 20:42:17.870897 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:42:17.870901 | orchestrator | 2026-02-23 20:42:17.870905 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-23 20:42:17.870908 | orchestrator | Monday 23 February 2026 20:41:15 +0000 (0:00:02.521) 0:00:32.108 ******* 2026-02-23 20:42:17.870912 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:42:17.870916 | orchestrator | 2026-02-23 20:42:17.870920 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-23 20:42:17.870923 | orchestrator | Monday 23 February 2026 20:41:33 +0000 (0:00:17.448) 0:00:49.557 ******* 2026-02-23 20:42:17.870927 | orchestrator | 2026-02-23 20:42:17.870931 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-23 20:42:17.870934 | orchestrator | Monday 23 February 2026 20:41:33 +0000 (0:00:00.067) 0:00:49.624 ******* 2026-02-23 20:42:17.870938 | orchestrator | 2026-02-23 20:42:17.870942 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-23 20:42:17.870946 | orchestrator | Monday 23 February 2026 20:41:33 +0000 (0:00:00.064) 0:00:49.689 ******* 2026-02-23 20:42:17.870949 | orchestrator | 2026-02-23 
20:42:17.870953 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-23 20:42:17.870957 | orchestrator | Monday 23 February 2026 20:41:33 +0000 (0:00:00.066) 0:00:49.755 ******* 2026-02-23 20:42:17.870960 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:42:17.870964 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:42:17.871002 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:42:17.871007 | orchestrator | 2026-02-23 20:42:17.871011 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:42:17.871016 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-23 20:42:17.871020 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-23 20:42:17.871024 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-23 20:42:17.871032 | orchestrator | 2026-02-23 20:42:17.871036 | orchestrator | 2026-02-23 20:42:17.871040 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:42:17.871043 | orchestrator | Monday 23 February 2026 20:42:16 +0000 (0:00:42.665) 0:01:32.421 ******* 2026-02-23 20:42:17.871047 | orchestrator | =============================================================================== 2026-02-23 20:42:17.871051 | orchestrator | horizon : Restart horizon container ------------------------------------ 42.67s 2026-02-23 20:42:17.871054 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.45s 2026-02-23 20:42:17.871058 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.68s 2026-02-23 20:42:17.871064 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.55s 2026-02-23 20:42:17.871068 | 
orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.52s 2026-02-23 20:42:17.871072 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.21s 2026-02-23 20:42:17.871075 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.91s 2026-02-23 20:42:17.871082 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.70s 2026-02-23 20:42:17.871086 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.65s 2026-02-23 20:42:17.871090 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.43s 2026-02-23 20:42:17.871093 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.99s 2026-02-23 20:42:17.871097 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.81s 2026-02-23 20:42:17.871101 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-02-23 20:42:17.871104 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s 2026-02-23 20:42:17.871108 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s 2026-02-23 20:42:17.871112 | orchestrator | horizon : Update policy file name --------------------------------------- 0.65s 2026-02-23 20:42:17.871116 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2026-02-23 20:42:17.871119 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2026-02-23 20:42:17.871123 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2026-02-23 20:42:17.871127 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s 2026-02-23 20:42:17.871130 | 
orchestrator | 2026-02-23 20:42:17 | INFO  | Task 13c40dec-6648-456f-b5d2-83a6dd87231e is in state SUCCESS 2026-02-23 20:42:17.871134 | orchestrator | 2026-02-23 20:42:17.871152 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-23 20:42:17.871156 | orchestrator | 2.16.14 2026-02-23 20:42:17.871160 | orchestrator | 2026-02-23 20:42:17.871164 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-23 20:42:17.871167 | orchestrator | 2026-02-23 20:42:17.871171 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-23 20:42:17.871175 | orchestrator | Monday 23 February 2026 20:40:04 +0000 (0:00:00.587) 0:00:00.587 ******* 2026-02-23 20:42:17.871179 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:42:17.871182 | orchestrator | 2026-02-23 20:42:17.871186 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-23 20:42:17.871190 | orchestrator | Monday 23 February 2026 20:40:04 +0000 (0:00:00.595) 0:00:01.183 ******* 2026-02-23 20:42:17.871194 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.871197 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.871201 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.871205 | orchestrator | 2026-02-23 20:42:17.871208 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-23 20:42:17.871212 | orchestrator | Monday 23 February 2026 20:40:05 +0000 (0:00:00.631) 0:00:01.815 ******* 2026-02-23 20:42:17.871216 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.871220 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.871223 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.871227 | orchestrator | 2026-02-23 20:42:17.871231 | orchestrator | TASK [ceph-facts : Check if 
podman binary is present] ************************** 2026-02-23 20:42:17.871234 | orchestrator | Monday 23 February 2026 20:40:05 +0000 (0:00:00.318) 0:00:02.133 ******* 2026-02-23 20:42:17.871238 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.871242 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.871245 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.871249 | orchestrator | 2026-02-23 20:42:17.871253 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-23 20:42:17.871257 | orchestrator | Monday 23 February 2026 20:40:06 +0000 (0:00:00.731) 0:00:02.865 ******* 2026-02-23 20:42:17.871260 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.871264 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.871269 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.871275 | orchestrator | 2026-02-23 20:42:17.871288 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-23 20:42:17.871296 | orchestrator | Monday 23 February 2026 20:40:06 +0000 (0:00:00.290) 0:00:03.156 ******* 2026-02-23 20:42:17.871303 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.871309 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.871315 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.871321 | orchestrator | 2026-02-23 20:42:17.871327 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-23 20:42:17.871333 | orchestrator | Monday 23 February 2026 20:40:07 +0000 (0:00:00.286) 0:00:03.442 ******* 2026-02-23 20:42:17.871338 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.871343 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.871349 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.871355 | orchestrator | 2026-02-23 20:42:17.871361 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-23 
20:42:17.871366 | orchestrator | Monday 23 February 2026 20:40:07 +0000 (0:00:00.308) 0:00:03.751 ******* 2026-02-23 20:42:17.871372 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.871379 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.871385 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.871399 | orchestrator | 2026-02-23 20:42:17.871405 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-23 20:42:17.871411 | orchestrator | Monday 23 February 2026 20:40:07 +0000 (0:00:00.477) 0:00:04.228 ******* 2026-02-23 20:42:17.871417 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.871433 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.871441 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.871447 | orchestrator | 2026-02-23 20:42:17.871453 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-23 20:42:17.871463 | orchestrator | Monday 23 February 2026 20:40:08 +0000 (0:00:00.294) 0:00:04.523 ******* 2026-02-23 20:42:17.871469 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-23 20:42:17.871475 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-23 20:42:17.871481 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-23 20:42:17.871487 | orchestrator | 2026-02-23 20:42:17.871494 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-23 20:42:17.871500 | orchestrator | Monday 23 February 2026 20:40:08 +0000 (0:00:00.648) 0:00:05.172 ******* 2026-02-23 20:42:17.871506 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.871511 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.871515 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.871519 | orchestrator | 2026-02-23 
20:42:17.871522 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-23 20:42:17.871526 | orchestrator | Monday 23 February 2026 20:40:09 +0000 (0:00:00.435) 0:00:05.607 ******* 2026-02-23 20:42:17.871530 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-23 20:42:17.871534 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-23 20:42:17.871537 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-23 20:42:17.871541 | orchestrator | 2026-02-23 20:42:17.871545 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-23 20:42:17.871549 | orchestrator | Monday 23 February 2026 20:40:11 +0000 (0:00:02.171) 0:00:07.779 ******* 2026-02-23 20:42:17.871552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-23 20:42:17.871556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-23 20:42:17.871560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-23 20:42:17.871564 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.871567 | orchestrator | 2026-02-23 20:42:17.871571 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-23 20:42:17.871599 | orchestrator | Monday 23 February 2026 20:40:12 +0000 (0:00:00.614) 0:00:08.394 ******* 2026-02-23 20:42:17.871606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.871612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.871619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.871625 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.871632 | orchestrator | 2026-02-23 20:42:17.871639 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-23 20:42:17.871646 | orchestrator | Monday 23 February 2026 20:40:12 +0000 (0:00:00.770) 0:00:09.164 ******* 2026-02-23 20:42:17.871654 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.871663 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.871667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.871671 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.871674 | orchestrator | 2026-02-23 20:42:17.871678 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-23 20:42:17.871682 | orchestrator | Monday 23 February 2026 20:40:13 +0000 (0:00:00.346) 0:00:09.511 ******* 2026-02-23 20:42:17.871689 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd4278161478e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-23 20:40:10.076016', 'end': '2026-02-23 20:40:10.114022', 'delta': '0:00:00.038006', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4278161478e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-23 20:42:17.871694 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '38346c98a66b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-23 20:40:10.814538', 'end': '2026-02-23 20:40:10.845697', 'delta': '0:00:00.031159', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38346c98a66b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-23 20:42:17.871743 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fce2d54ca435', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-23 20:40:11.366065', 'end': '2026-02-23 20:40:11.391594', 'delta': '0:00:00.025529', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fce2d54ca435'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-23 20:42:17.871749 | orchestrator | 2026-02-23 20:42:17.871753 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-23 20:42:17.871757 | orchestrator | Monday 23 February 2026 20:40:13 +0000 (0:00:00.196) 0:00:09.707 ******* 2026-02-23 20:42:17.871761 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.871764 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.871768 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.871772 | orchestrator | 2026-02-23 20:42:17.871776 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-23 20:42:17.871780 | orchestrator | Monday 23 February 2026 20:40:13 +0000 (0:00:00.436) 0:00:10.144 ******* 2026-02-23 20:42:17.871783 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-23 20:42:17.871787 | orchestrator | 2026-02-23 20:42:17.871791 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-23 20:42:17.871794 | orchestrator | Monday 23 February 2026 20:40:15 +0000 (0:00:01.580) 0:00:11.724 ******* 2026-02-23 
20:42:17.871798 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.871802 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.871806 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.871810 | orchestrator | 2026-02-23 20:42:17.871813 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-23 20:42:17.871817 | orchestrator | Monday 23 February 2026 20:40:15 +0000 (0:00:00.291) 0:00:12.016 ******* 2026-02-23 20:42:17.871821 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.871825 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.871828 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.871834 | orchestrator | 2026-02-23 20:42:17.871840 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-23 20:42:17.871846 | orchestrator | Monday 23 February 2026 20:40:16 +0000 (0:00:00.361) 0:00:12.378 ******* 2026-02-23 20:42:17.871852 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.871886 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.871892 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.871895 | orchestrator | 2026-02-23 20:42:17.871899 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-23 20:42:17.871903 | orchestrator | Monday 23 February 2026 20:40:16 +0000 (0:00:00.388) 0:00:12.766 ******* 2026-02-23 20:42:17.871907 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.871912 | orchestrator | 2026-02-23 20:42:17.871919 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-23 20:42:17.871924 | orchestrator | Monday 23 February 2026 20:40:16 +0000 (0:00:00.097) 0:00:12.864 ******* 2026-02-23 20:42:17.871928 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.871932 | orchestrator | 2026-02-23 20:42:17.871936 | orchestrator | 
TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-23 20:42:17.871943 | orchestrator | Monday 23 February 2026 20:40:16 +0000 (0:00:00.206) 0:00:13.070 ******* 2026-02-23 20:42:17.871947 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.871951 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.871957 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.871961 | orchestrator | 2026-02-23 20:42:17.871965 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-23 20:42:17.871969 | orchestrator | Monday 23 February 2026 20:40:17 +0000 (0:00:00.257) 0:00:13.328 ******* 2026-02-23 20:42:17.871972 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.871976 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.871980 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.872013 | orchestrator | 2026-02-23 20:42:17.872018 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-23 20:42:17.872022 | orchestrator | Monday 23 February 2026 20:40:17 +0000 (0:00:00.287) 0:00:13.615 ******* 2026-02-23 20:42:17.872025 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.872029 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.872033 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.872037 | orchestrator | 2026-02-23 20:42:17.872041 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-23 20:42:17.872044 | orchestrator | Monday 23 February 2026 20:40:17 +0000 (0:00:00.399) 0:00:14.015 ******* 2026-02-23 20:42:17.872048 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.872052 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.872070 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.872074 | orchestrator | 2026-02-23 20:42:17.872078 | orchestrator | 
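The "Resolve device link(s)" / "Set_fact build devices from resolved symlinks" task pairs above (and their `dedicated_device` and `bluestore_wal_device` counterparts below) normalize any `/dev/disk/by-*` symlinks in the configured device lists to canonical block device paths. All of them are skipped in this run, but the underlying operation can be sketched as follows (assumed behavior; the role does this with readlink on the target host, not with this code):

```python
import os

def resolve_devices(devices):
    """Resolve /dev/disk/by-* style symlinks to canonical block
    device paths and de-duplicate the result, as the paired
    'Resolve ... link(s)' and 'Set_fact build ... from resolved
    symlinks' tasks do. (Illustrative sketch only.)"""
    resolved = []
    for dev in devices:
        real = os.path.realpath(dev)  # returns dev unchanged if it is not a symlink
        if real not in resolved:
            resolved.append(real)
    return resolved
```

Resolving up front means later OSD tasks compare devices by their canonical names, so the same disk referenced via two different symlinks is not treated as two devices.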
TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-23 20:42:17.872082 | orchestrator | Monday 23 February 2026 20:40:18 +0000 (0:00:00.300) 0:00:14.315 ******* 2026-02-23 20:42:17.872086 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.872089 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.872093 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.872097 | orchestrator | 2026-02-23 20:42:17.872101 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-23 20:42:17.872104 | orchestrator | Monday 23 February 2026 20:40:18 +0000 (0:00:00.262) 0:00:14.578 ******* 2026-02-23 20:42:17.872108 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.872113 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.872119 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.872125 | orchestrator | 2026-02-23 20:42:17.872153 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-23 20:42:17.872161 | orchestrator | Monday 23 February 2026 20:40:18 +0000 (0:00:00.300) 0:00:14.878 ******* 2026-02-23 20:42:17.872167 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.872172 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.872178 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.872184 | orchestrator | 2026-02-23 20:42:17.872190 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-23 20:42:17.872196 | orchestrator | Monday 23 February 2026 20:40:19 +0000 (0:00:00.391) 0:00:15.270 ******* 2026-02-23 20:42:17.872203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16360c2d--86c0--538a--b982--f32cf88f5f8a-osd--block--16360c2d--86c0--538a--b982--f32cf88f5f8a', 
'dm-uuid-LVM-Nkdbq1LawE0ReTPXUhLEG2R6QcqUR8xkbZwCH11QO4HjCHQ5LicCUX5XTJzN8kYs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fef89255--3917--5f7c--b809--8ef443377219-osd--block--fef89255--3917--5f7c--b809--8ef443377219', 'dm-uuid-LVM-YvzUYwl1JcgvAVAxZPXLVxr4EUsrX9IXRdgt6Zmazq2UbzYqEvTBDrHdTHFQrMcI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872237 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part15', 
'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b14837c--f03f--563c--b8ac--393f544981fc-osd--block--2b14837c--f03f--563c--b8ac--393f544981fc', 'dm-uuid-LVM-tNTgI5saESAg1nCaqlR9MKL12ZN7k9vkHltVmwqAKhddPI6QQrGkT5rHExewoTVN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--16360c2d--86c0--538a--b982--f32cf88f5f8a-osd--block--16360c2d--86c0--538a--b982--f32cf88f5f8a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eWooiD-b2sR-6z2q-VQbq-mprP-2EvV-5aCXVl', 'scsi-0QEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da', 'scsi-SQEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21252442--555c--5549--b537--6075952af6e0-osd--block--21252442--555c--5549--b537--6075952af6e0', 'dm-uuid-LVM-WbJ6jQDFuG0eiw2AvPnFKwfTGKyQ1HsOQRbFUmDP0Q2qMZ3x45u1GgjgnrCRetLP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fef89255--3917--5f7c--b809--8ef443377219-osd--block--fef89255--3917--5f7c--b809--8ef443377219'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rP690M-DyCY-S28R-pawf-vdDl-Z4lr-xzSgcm', 'scsi-0QEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4', 'scsi-SQEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9', 'scsi-SQEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872359 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.872384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--086e8658--baeb--56a9--865d--4af6c70c9ca3-osd--block--086e8658--baeb--56a9--865d--4af6c70c9ca3', 'dm-uuid-LVM-Q2vb73DEBCSwr8JaoWS0rafAX3qiDw9sRjd7guIeqHPd9UbUJ7MgMqoZxEcSOT30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part1', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part14', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part15', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part16', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872463 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--721c0c76--436b--5140--8464--e8c748d186e3-osd--block--721c0c76--436b--5140--8464--e8c748d186e3', 'dm-uuid-LVM-3i5Fx08Tjohflg5YMo1Pt9tGnn1Rd0joU0KqPjJX2RWrTukQnGeSg2Gldy81ePsb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2b14837c--f03f--563c--b8ac--393f544981fc-osd--block--2b14837c--f03f--563c--b8ac--393f544981fc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jUu12U-JDUP-t2xn-uCMW-K73I-fPdl-uxTrzn', 'scsi-0QEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654', 'scsi-SQEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--21252442--555c--5549--b537--6075952af6e0-osd--block--21252442--555c--5549--b537--6075952af6e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qkizZk-oM4M-RBqR-pMYN-1wz2-ne3V-Umx5TF', 'scsi-0QEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21', 'scsi-SQEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a', 'scsi-SQEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-23 20:42:17.872518 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.872522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-23 20:42:17.872543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--086e8658--baeb--56a9--865d--4af6c70c9ca3-osd--block--086e8658--baeb--56a9--865d--4af6c70c9ca3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tO5Pdb-Nt1e-J0M6-cyyf-vfOr-lT2b-22Z1Ke', 'scsi-0QEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163', 'scsi-SQEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--721c0c76--436b--5140--8464--e8c748d186e3-osd--block--721c0c76--436b--5140--8464--e8c748d186e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7HvTuf-uFA6-YHez-MQb3-c5bY-QZgC-UWVWGV', 'scsi-0QEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33', 'scsi-SQEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-23 20:42:17.872562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0', 'scsi-SQEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-23 20:42:17.872569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-23 20:42:17.872576 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:42:17.872580 | orchestrator |
2026-02-23 20:42:17.872584 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-23 20:42:17.872588 | orchestrator | Monday 23 February 2026 20:40:19 +0000 (0:00:00.476) 0:00:15.746 *******
2026-02-23 20:42:17.872592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16360c2d--86c0--538a--b982--f32cf88f5f8a-osd--block--16360c2d--86c0--538a--b982--f32cf88f5f8a', 'dm-uuid-LVM-Nkdbq1LawE0ReTPXUhLEG2R6QcqUR8xkbZwCH11QO4HjCHQ5LicCUX5XTJzN8kYs'], 'labels': [], 'masters': [], 'uuids': []},
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872596 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fef89255--3917--5f7c--b809--8ef443377219-osd--block--fef89255--3917--5f7c--b809--8ef443377219', 'dm-uuid-LVM-YvzUYwl1JcgvAVAxZPXLVxr4EUsrX9IXRdgt6Zmazq2UbzYqEvTBDrHdTHFQrMcI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872600 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872624 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872628 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872637 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872642 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b14837c--f03f--563c--b8ac--393f544981fc-osd--block--2b14837c--f03f--563c--b8ac--393f544981fc', 'dm-uuid-LVM-tNTgI5saESAg1nCaqlR9MKL12ZN7k9vkHltVmwqAKhddPI6QQrGkT5rHExewoTVN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872651 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_8040d0c1-88b9-4bf5-a9b8-5090efbb82ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-23 20:42:17.872656 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21252442--555c--5549--b537--6075952af6e0-osd--block--21252442--555c--5549--b537--6075952af6e0', 'dm-uuid-LVM-WbJ6jQDFuG0eiw2AvPnFKwfTGKyQ1HsOQRbFUmDP0Q2qMZ3x45u1GgjgnrCRetLP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--16360c2d--86c0--538a--b982--f32cf88f5f8a-osd--block--16360c2d--86c0--538a--b982--f32cf88f5f8a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eWooiD-b2sR-6z2q-VQbq-mprP-2EvV-5aCXVl', 'scsi-0QEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da', 'scsi-SQEMU_QEMU_HARDDISK_bead3466-decd-4fa8-a04b-557b053b82da'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872683 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fef89255--3917--5f7c--b809--8ef443377219-osd--block--fef89255--3917--5f7c--b809--8ef443377219'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rP690M-DyCY-S28R-pawf-vdDl-Z4lr-xzSgcm', 'scsi-0QEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4', 'scsi-SQEMU_QEMU_HARDDISK_11854c96-d8a1-4784-a235-d2862629dfe4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872692 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9', 'scsi-SQEMU_QEMU_HARDDISK_7c30197a-895f-4957-9949-9f1150308fa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872702 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872721 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872735 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872739 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.872743 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872747 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872751 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872757 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872761 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--086e8658--baeb--56a9--865d--4af6c70c9ca3-osd--block--086e8658--baeb--56a9--865d--4af6c70c9ca3', 'dm-uuid-LVM-Q2vb73DEBCSwr8JaoWS0rafAX3qiDw9sRjd7guIeqHPd9UbUJ7MgMqoZxEcSOT30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872771 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part1', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part14', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part15', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part16', 'scsi-SQEMU_QEMU_HARDDISK_1df12093-382b-4d3e-affc-8b6c9f04cec0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 
1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872776 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2b14837c--f03f--563c--b8ac--393f544981fc-osd--block--2b14837c--f03f--563c--b8ac--393f544981fc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jUu12U-JDUP-t2xn-uCMW-K73I-fPdl-uxTrzn', 'scsi-0QEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654', 'scsi-SQEMU_QEMU_HARDDISK_5aac1e3f-8db2-4358-9586-7110a9e5b654'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872780 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--721c0c76--436b--5140--8464--e8c748d186e3-osd--block--721c0c76--436b--5140--8464--e8c748d186e3', 'dm-uuid-LVM-3i5Fx08Tjohflg5YMo1Pt9tGnn1Rd0joU0KqPjJX2RWrTukQnGeSg2Gldy81ePsb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872791 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872795 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--21252442--555c--5549--b537--6075952af6e0-osd--block--21252442--555c--5549--b537--6075952af6e0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qkizZk-oM4M-RBqR-pMYN-1wz2-ne3V-Umx5TF', 'scsi-0QEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21', 'scsi-SQEMU_QEMU_HARDDISK_5253639c-fb89-4131-a977-1b9b70ff9a21'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872799 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872803 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a', 'scsi-SQEMU_QEMU_HARDDISK_ff5a095b-a008-4c91-9745-d5e81356257a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872847 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872861 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872868 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872873 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.872876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872880 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872884 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4e3a894-d2c6-47f3-8ba9-1b6214637a5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-23 20:42:17.872905 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--086e8658--baeb--56a9--865d--4af6c70c9ca3-osd--block--086e8658--baeb--56a9--865d--4af6c70c9ca3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tO5Pdb-Nt1e-J0M6-cyyf-vfOr-lT2b-22Z1Ke', 'scsi-0QEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163', 'scsi-SQEMU_QEMU_HARDDISK_3a7ae571-7eb7-4840-83c2-d00e4c8c1163'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872911 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--721c0c76--436b--5140--8464--e8c748d186e3-osd--block--721c0c76--436b--5140--8464--e8c748d186e3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7HvTuf-uFA6-YHez-MQb3-c5bY-QZgC-UWVWGV', 'scsi-0QEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33', 'scsi-SQEMU_QEMU_HARDDISK_1bc90d63-4a85-4c90-b970-1ea304425c33'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872918 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0', 'scsi-SQEMU_QEMU_HARDDISK_33628d78-ee4f-4c3b-aa76-e2d4933b92b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872924 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-23-19-48-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-23 20:42:17.872928 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.872932 | orchestrator | 2026-02-23 20:42:17.872936 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-23 20:42:17.872940 | orchestrator | Monday 23 February 2026 20:40:20 +0000 (0:00:00.580) 0:00:16.327 ******* 2026-02-23 20:42:17.872944 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.872948 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.872951 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.872955 | orchestrator | 2026-02-23 20:42:17.872959 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-23 20:42:17.872963 | orchestrator | Monday 23 February 2026 20:40:20 +0000 (0:00:00.661) 0:00:16.988 ******* 2026-02-23 20:42:17.872966 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.872970 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.872974 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.872978 | orchestrator | 2026-02-23 20:42:17.873004 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-23 20:42:17.873008 | orchestrator | Monday 23 February 2026 20:40:21 +0000 (0:00:00.537) 0:00:17.525 ******* 2026-02-23 20:42:17.873012 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.873016 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.873020 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.873024 | orchestrator | 2026-02-23 20:42:17.873027 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-23 20:42:17.873031 | orchestrator | Monday 23 February 2026 20:40:23 +0000 (0:00:01.752) 0:00:19.278 
******* 2026-02-23 20:42:17.873035 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873039 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.873043 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.873046 | orchestrator | 2026-02-23 20:42:17.873050 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-23 20:42:17.873054 | orchestrator | Monday 23 February 2026 20:40:23 +0000 (0:00:00.297) 0:00:19.575 ******* 2026-02-23 20:42:17.873058 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873064 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.873068 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.873072 | orchestrator | 2026-02-23 20:42:17.873075 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-23 20:42:17.873079 | orchestrator | Monday 23 February 2026 20:40:23 +0000 (0:00:00.406) 0:00:19.981 ******* 2026-02-23 20:42:17.873083 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873087 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.873090 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.873094 | orchestrator | 2026-02-23 20:42:17.873098 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-23 20:42:17.873102 | orchestrator | Monday 23 February 2026 20:40:24 +0000 (0:00:00.497) 0:00:20.478 ******* 2026-02-23 20:42:17.873105 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-23 20:42:17.873109 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-23 20:42:17.873113 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-23 20:42:17.873117 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-23 20:42:17.873120 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-23 20:42:17.873124 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-23 20:42:17.873128 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-23 20:42:17.873131 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-23 20:42:17.873138 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-23 20:42:17.873145 | orchestrator | 2026-02-23 20:42:17.873151 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-23 20:42:17.873157 | orchestrator | Monday 23 February 2026 20:40:25 +0000 (0:00:00.833) 0:00:21.312 ******* 2026-02-23 20:42:17.873163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-23 20:42:17.873167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-23 20:42:17.873171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-23 20:42:17.873175 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873178 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-23 20:42:17.873182 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-23 20:42:17.873219 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-23 20:42:17.873224 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.873227 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-23 20:42:17.873231 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-23 20:42:17.873235 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-23 20:42:17.873239 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.873259 | orchestrator | 2026-02-23 20:42:17.873263 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-23 20:42:17.873267 | orchestrator | Monday 23 February 2026 20:40:25 +0000 (0:00:00.397) 0:00:21.709 ******* 2026-02-23 
20:42:17.873271 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:42:17.873275 | orchestrator | 2026-02-23 20:42:17.873279 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-23 20:42:17.873284 | orchestrator | Monday 23 February 2026 20:40:26 +0000 (0:00:00.696) 0:00:22.406 ******* 2026-02-23 20:42:17.873298 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873303 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.873310 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.873314 | orchestrator | 2026-02-23 20:42:17.873318 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-23 20:42:17.873322 | orchestrator | Monday 23 February 2026 20:40:26 +0000 (0:00:00.323) 0:00:22.729 ******* 2026-02-23 20:42:17.873326 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873333 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.873336 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.873342 | orchestrator | 2026-02-23 20:42:17.873348 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-23 20:42:17.873354 | orchestrator | Monday 23 February 2026 20:40:26 +0000 (0:00:00.317) 0:00:23.047 ******* 2026-02-23 20:42:17.873359 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873364 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.873370 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:42:17.873376 | orchestrator | 2026-02-23 20:42:17.873382 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-23 20:42:17.873387 | orchestrator | Monday 23 February 2026 20:40:27 +0000 (0:00:00.301) 0:00:23.348 ******* 2026-02-23 
20:42:17.873393 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.873399 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.873406 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.873412 | orchestrator | 2026-02-23 20:42:17.873418 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-23 20:42:17.873423 | orchestrator | Monday 23 February 2026 20:40:27 +0000 (0:00:00.670) 0:00:24.019 ******* 2026-02-23 20:42:17.873430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-23 20:42:17.873436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-23 20:42:17.873442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-23 20:42:17.873446 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873450 | orchestrator | 2026-02-23 20:42:17.873455 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-23 20:42:17.873461 | orchestrator | Monday 23 February 2026 20:40:28 +0000 (0:00:00.384) 0:00:24.403 ******* 2026-02-23 20:42:17.873467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-23 20:42:17.873476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-23 20:42:17.873483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-23 20:42:17.873489 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873495 | orchestrator | 2026-02-23 20:42:17.873501 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-23 20:42:17.873507 | orchestrator | Monday 23 February 2026 20:40:28 +0000 (0:00:00.416) 0:00:24.820 ******* 2026-02-23 20:42:17.873513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-23 20:42:17.873519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-23 20:42:17.873525 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-23 20:42:17.873532 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873538 | orchestrator | 2026-02-23 20:42:17.873544 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-23 20:42:17.873550 | orchestrator | Monday 23 February 2026 20:40:28 +0000 (0:00:00.393) 0:00:25.214 ******* 2026-02-23 20:42:17.873556 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:42:17.873562 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:42:17.873568 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:42:17.873575 | orchestrator | 2026-02-23 20:42:17.873581 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-23 20:42:17.873587 | orchestrator | Monday 23 February 2026 20:40:29 +0000 (0:00:00.418) 0:00:25.633 ******* 2026-02-23 20:42:17.873593 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-23 20:42:17.873600 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-23 20:42:17.873606 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-23 20:42:17.873612 | orchestrator | 2026-02-23 20:42:17.873623 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-23 20:42:17.873630 | orchestrator | Monday 23 February 2026 20:40:29 +0000 (0:00:00.523) 0:00:26.156 ******* 2026-02-23 20:42:17.873636 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-23 20:42:17.873648 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-23 20:42:17.873654 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-23 20:42:17.873661 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-23 20:42:17.873667 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-23 20:42:17.873673 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-23 20:42:17.873679 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-23 20:42:17.873686 | orchestrator | 2026-02-23 20:42:17.873693 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-23 20:42:17.873699 | orchestrator | Monday 23 February 2026 20:40:30 +0000 (0:00:01.044) 0:00:27.201 ******* 2026-02-23 20:42:17.873719 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-23 20:42:17.873724 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-23 20:42:17.873728 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-23 20:42:17.873732 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-23 20:42:17.873735 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-23 20:42:17.873739 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-23 20:42:17.873748 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-23 20:42:17.873752 | orchestrator | 2026-02-23 20:42:17.873757 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-23 20:42:17.873763 | orchestrator | Monday 23 February 2026 20:40:32 +0000 (0:00:01.958) 0:00:29.159 ******* 2026-02-23 20:42:17.873769 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:42:17.873775 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:42:17.873781 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-23 20:42:17.873787 | orchestrator | 2026-02-23 20:42:17.873793 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-23 20:42:17.873800 | orchestrator | Monday 23 February 2026 20:40:33 +0000 (0:00:00.364) 0:00:29.524 ******* 2026-02-23 20:42:17.873807 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-23 20:42:17.873815 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-23 20:42:17.873821 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-23 20:42:17.873828 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-23 20:42:17.873835 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-23 20:42:17.873847 | orchestrator | 2026-02-23 20:42:17.873882 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-23 20:42:17.873886 | orchestrator | Monday 23 February 2026 20:41:18 +0000 (0:00:44.934) 0:01:14.459 ******* 2026-02-23 20:42:17.873890 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873894 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873897 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873901 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873905 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873912 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873916 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-23 20:42:17.873919 | orchestrator | 2026-02-23 20:42:17.873923 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-23 20:42:17.873927 | orchestrator | Monday 23 February 2026 20:41:44 +0000 (0:00:26.540) 0:01:40.999 ******* 2026-02-23 20:42:17.873931 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873935 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873938 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873942 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873946 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873949 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873953 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-23 20:42:17.873957 | orchestrator | 2026-02-23 20:42:17.873961 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-23 20:42:17.873965 | orchestrator | Monday 23 February 2026 20:41:57 +0000 (0:00:12.310) 0:01:53.310 ******* 2026-02-23 20:42:17.873968 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873972 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-23 20:42:17.873976 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-23 20:42:17.873980 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873984 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-23 20:42:17.873991 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-23 20:42:17.873995 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.873998 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-23 20:42:17.874002 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-23 20:42:17.874006 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.874010 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-23 20:42:17.874065 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-23 20:42:17.874072 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.874079 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-23 20:42:17.874085 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-23 20:42:17.874091 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-23 20:42:17.874114 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-23 20:42:17.874122 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-23 20:42:17.874126 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-23 20:42:17.874130 | orchestrator | 2026-02-23 20:42:17.874134 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:42:17.874138 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-23 20:42:17.874142 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-23 20:42:17.874146 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-23 20:42:17.874150 | orchestrator | 2026-02-23 20:42:17.874154 | orchestrator | 2026-02-23 20:42:17.874158 | orchestrator | 2026-02-23 20:42:17.874161 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:42:17.874165 | orchestrator | Monday 23 February 2026 20:42:15 +0000 (0:00:18.408) 0:02:11.719 ******* 2026-02-23 20:42:17.874169 | orchestrator | =============================================================================== 2026-02-23 20:42:17.874173 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.94s 2026-02-23 20:42:17.874176 | orchestrator | generate keys ---------------------------------------------------------- 26.54s 2026-02-23 20:42:17.874180 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.41s 
2026-02-23 20:42:17.874184 | orchestrator | get keys from monitors ------------------------------------------------- 12.31s
2026-02-23 20:42:17.874188 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.17s
2026-02-23 20:42:17.874192 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.96s
2026-02-23 20:42:17.874196 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.75s
2026-02-23 20:42:17.874199 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.58s
2026-02-23 20:42:17.874203 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.04s
2026-02-23 20:42:17.874211 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.83s
2026-02-23 20:42:17.874215 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s
2026-02-23 20:42:17.874219 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.73s
2026-02-23 20:42:17.874222 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s
2026-02-23 20:42:17.874226 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.67s
2026-02-23 20:42:17.874230 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s
2026-02-23 20:42:17.874234 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s
2026-02-23 20:42:17.874238 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.63s
2026-02-23 20:42:17.874241 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.61s
2026-02-23 20:42:17.874245 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.60s
2026-02-23 20:42:17.874249 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s
2026-02-23 20:42:17.874253 | orchestrator | 2026-02-23 20:42:17 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:17.874257 | orchestrator | 2026-02-23 20:42:17 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:20.924107 | orchestrator | 2026-02-23 20:42:20 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:20.925330 | orchestrator | 2026-02-23 20:42:20 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:20.925609 | orchestrator | 2026-02-23 20:42:20 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:23.970991 | orchestrator | 2026-02-23 20:42:23 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:23.972816 | orchestrator | 2026-02-23 20:42:23 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:23.973042 | orchestrator | 2026-02-23 20:42:23 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:27.020571 | orchestrator | 2026-02-23 20:42:27 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:27.021858 | orchestrator | 2026-02-23 20:42:27 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:27.021893 | orchestrator | 2026-02-23 20:42:27 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:30.065483 | orchestrator | 2026-02-23 20:42:30 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:30.066841 | orchestrator | 2026-02-23 20:42:30 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:30.066909 | orchestrator | 2026-02-23 20:42:30 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:33.116504 | orchestrator | 2026-02-23 20:42:33 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:33.116649 | orchestrator | 2026-02-23 20:42:33 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:33.116684 | orchestrator | 2026-02-23 20:42:33 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:36.162825 | orchestrator | 2026-02-23 20:42:36 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:36.165460 | orchestrator | 2026-02-23 20:42:36 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:36.165781 | orchestrator | 2026-02-23 20:42:36 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:39.219037 | orchestrator | 2026-02-23 20:42:39 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:39.220253 | orchestrator | 2026-02-23 20:42:39 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:39.220301 | orchestrator | 2026-02-23 20:42:39 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:42.268962 | orchestrator | 2026-02-23 20:42:42 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:42.270871 | orchestrator | 2026-02-23 20:42:42 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:42.270959 | orchestrator | 2026-02-23 20:42:42 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:45.319930 | orchestrator | 2026-02-23 20:42:45 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:45.320421 | orchestrator | 2026-02-23 20:42:45 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:45.320987 | orchestrator | 2026-02-23 20:42:45 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:48.361668 | orchestrator | 2026-02-23 20:42:48 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:48.364168 | orchestrator | 2026-02-23 20:42:48 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:48.364217 | orchestrator | 2026-02-23 20:42:48 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:51.412424 | orchestrator | 2026-02-23 20:42:51 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state STARTED
2026-02-23 20:42:51.413263 | orchestrator | 2026-02-23 20:42:51 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:51.413309 | orchestrator | 2026-02-23 20:42:51 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:54.459990 | orchestrator | 2026-02-23 20:42:54 | INFO  | Task 8c31e807-97eb-4e6f-94b3-bdaf5022b65e is in state SUCCESS
2026-02-23 20:42:54.463489 | orchestrator | 2026-02-23 20:42:54 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:54.463537 | orchestrator | 2026-02-23 20:42:54 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:42:57.511801 | orchestrator | 2026-02-23 20:42:57 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED
2026-02-23 20:42:57.514227 | orchestrator | 2026-02-23 20:42:57 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:42:57.514716 | orchestrator | 2026-02-23 20:42:57 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:43:00.558360 | orchestrator | 2026-02-23 20:43:00 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED
2026-02-23 20:43:00.560122 | orchestrator | 2026-02-23 20:43:00 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:43:00.560250 | orchestrator | 2026-02-23 20:43:00 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:43:03.609219 | orchestrator | 2026-02-23 20:43:03 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED
2026-02-23 20:43:03.610358 | orchestrator | 2026-02-23 20:43:03 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:43:03.610805 | orchestrator | 2026-02-23 20:43:03 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:43:06.648054 | orchestrator | 2026-02-23 20:43:06 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED
2026-02-23 20:43:06.649389 | orchestrator | 2026-02-23 20:43:06 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:43:06.649496 | orchestrator | 2026-02-23 20:43:06 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:43:09.697125 | orchestrator | 2026-02-23 20:43:09 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED
2026-02-23 20:43:09.699966 | orchestrator | 2026-02-23 20:43:09 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:43:09.700010 | orchestrator | 2026-02-23 20:43:09 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:43:12.752004 | orchestrator | 2026-02-23 20:43:12 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED
2026-02-23 20:43:12.754150 | orchestrator | 2026-02-23 20:43:12 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:43:12.754789 | orchestrator | 2026-02-23 20:43:12 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:43:15.803905 | orchestrator | 2026-02-23 20:43:15 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED
2026-02-23 20:43:15.805638 | orchestrator | 2026-02-23 20:43:15 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:43:15.805686 | orchestrator | 2026-02-23 20:43:15 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:43:18.845079 | orchestrator | 2026-02-23 20:43:18 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED
2026-02-23 20:43:18.847353 | orchestrator | 2026-02-23 20:43:18 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:43:18.847410 | orchestrator | 2026-02-23 20:43:18 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:43:21.881224 | orchestrator | 2026-02-23 20:43:21 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED
2026-02-23 20:43:21.881584 | orchestrator | 2026-02-23 20:43:21 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state STARTED
2026-02-23 20:43:21.881613 | orchestrator | 2026-02-23 20:43:21 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:43:24.934475 | orchestrator | 2026-02-23 20:43:24 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED
2026-02-23 20:43:24.935113 | orchestrator | 2026-02-23 20:43:24 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED
2026-02-23 20:43:24.935670 | orchestrator | 2026-02-23 20:43:24 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED
2026-02-23 20:43:24.936265 | orchestrator | 2026-02-23 20:43:24 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED
2026-02-23 20:43:24.937120 | orchestrator | 2026-02-23 20:43:24 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED
2026-02-23 20:43:24.938923 | orchestrator | 2026-02-23 20:43:24 | INFO  | Task 11f1d7db-6ef3-4daf-aa26-4c6a7af0d8f6 is in state SUCCESS
2026-02-23 20:43:24.940466 | orchestrator |
2026-02-23 20:43:24.940567 | orchestrator |
2026-02-23 20:43:24.940575 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-02-23 20:43:24.940583 | orchestrator |
2026-02-23 20:43:24.940589 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-02-23 20:43:24.940596 | orchestrator | Monday 23 February 2026 20:42:20 +0000 (0:00:00.154) 0:00:00.154 *******
2026-02-23 20:43:24.940602 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-23 20:43:24.940610 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.940616 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.940621 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-23 20:43:24.940627 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.940673 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-23 20:43:24.940679 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-23 20:43:24.940685 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-23 20:43:24.940691 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-23 20:43:24.940697 | orchestrator |
2026-02-23 20:43:24.940730 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-02-23 20:43:24.940737 | orchestrator | Monday 23 February 2026 20:42:24 +0000 (0:00:04.318) 0:00:04.472 *******
2026-02-23 20:43:24.940785 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-23 20:43:24.940792 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.940798 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.940804 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-23 20:43:24.940810 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.940816 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-23 20:43:24.940822 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-23 20:43:24.940848 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-23 20:43:24.941000 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-23 20:43:24.941011 | orchestrator |
2026-02-23 20:43:24.941017 | orchestrator | TASK [Create share directory] **************************************************
2026-02-23 20:43:24.941022 | orchestrator | Monday 23 February 2026 20:42:28 +0000 (0:00:04.094) 0:00:08.566 *******
2026-02-23 20:43:24.941028 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-23 20:43:24.941035 | orchestrator |
2026-02-23 20:43:24.941040 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-02-23 20:43:24.941046 | orchestrator | Monday 23 February 2026 20:42:29 +0000 (0:00:01.071) 0:00:09.638 *******
2026-02-23 20:43:24.941051 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-02-23 20:43:24.941057 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.941063 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.941068 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-02-23 20:43:24.941074 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.941080 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-02-23 20:43:24.941085 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-02-23 20:43:24.941091 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-02-23 20:43:24.941096 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-02-23 20:43:24.941103 | orchestrator |
2026-02-23 20:43:24.941122 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-02-23 20:43:24.941128 | orchestrator | Monday 23 February 2026 20:42:43 +0000 (0:00:13.781) 0:00:23.420 *******
2026-02-23 20:43:24.941133 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-02-23 20:43:24.941138 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-02-23 20:43:24.941145 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-23 20:43:24.941151 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-23 20:43:24.941168 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-23 20:43:24.941174 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-23 20:43:24.941179 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-02-23 20:43:24.941185 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-02-23 20:43:24.941190 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-02-23 20:43:24.941196 | orchestrator |
2026-02-23 20:43:24.941202 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-02-23 20:43:24.941208 | orchestrator | Monday 23 February 2026 20:42:46 +0000 (0:00:03.088) 0:00:26.508 *******
2026-02-23 20:43:24.941214 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-02-23 20:43:24.941220 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.941227 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.941233 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-02-23 20:43:24.941248 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-23 20:43:24.941256 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-02-23 20:43:24.941259 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-02-23 20:43:24.941263 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-02-23 20:43:24.941267 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-02-23 20:43:24.941271 | orchestrator |
2026-02-23 20:43:24.941326 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:43:24.941331 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:43:24.941337 | orchestrator |
2026-02-23 20:43:24.941341 | orchestrator |
2026-02-23 20:43:24.941345 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:43:24.941349 | orchestrator | Monday 23 February 2026 20:42:53 +0000 (0:00:06.881) 0:00:33.390 *******
2026-02-23 20:43:24.941352 | orchestrator | ===============================================================================
2026-02-23 20:43:24.941356 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.78s
2026-02-23 20:43:24.941360 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.88s
2026-02-23 20:43:24.941364 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.32s
2026-02-23 20:43:24.941368 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.09s
2026-02-23 20:43:24.941372 | orchestrator | Check if target directories exist --------------------------------------- 3.09s
2026-02-23 20:43:24.941375 | orchestrator | Create share directory -------------------------------------------------- 1.07s
2026-02-23 20:43:24.941380 | orchestrator |
2026-02-23 20:43:24.941384 | orchestrator |
2026-02-23 20:43:24.941388 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:43:24.941391 | orchestrator |
2026-02-23 20:43:24.941397 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:43:24.941403 | orchestrator | Monday 23 February 2026 20:40:44 +0000 (0:00:00.231) 0:00:00.231 *******
2026-02-23 20:43:24.941409 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:43:24.941415 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:43:24.941421 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:43:24.941427 | orchestrator |
2026-02-23 20:43:24.941433 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-23 20:43:24.941439 | orchestrator | Monday 23 February 2026 20:40:44 +0000 (0:00:00.259) 0:00:00.491 *******
2026-02-23 20:43:24.941444 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-23 20:43:24.941450 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-23 20:43:24.941457 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-23 20:43:24.941634 | orchestrator |
2026-02-23 20:43:24.941641 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-02-23 20:43:24.941645 | orchestrator |
2026-02-23 20:43:24.941650 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-23 20:43:24.941653 | orchestrator | Monday 23 February 2026 20:40:44 +0000 (0:00:00.331) 0:00:00.823 *******
2026-02-23 20:43:24.941658 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:43:24.941662 | orchestrator |
2026-02-23 20:43:24.941666 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-02-23 20:43:24.941676 | orchestrator | Monday 23 February 2026 20:40:45 +0000 (0:00:00.518) 0:00:01.342 *******
2026-02-23 20:43:24.941705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-23 20:43:24.941722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-23 20:43:24.941728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-23 20:43:24.941733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-23 20:43:24.941742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-23 20:43:24.941762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-23 20:43:24.941767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-23 20:43:24.941772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-23 20:43:24.941776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-23 20:43:24.941780 | orchestrator |
2026-02-23 20:43:24.941784 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-02-23 20:43:24.941788 | orchestrator | Monday 23 February 2026 20:40:46 +0000 (0:00:01.482) 0:00:02.824 *******
2026-02-23 20:43:24.941792 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:43:24.941796 | orchestrator |
2026-02-23 20:43:24.941800 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-02-23 20:43:24.941804 | orchestrator | Monday 23 February 2026 20:40:46 +0000 (0:00:00.118) 0:00:02.942 *******
2026-02-23 20:43:24.941807 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:43:24.941811 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:43:24.941815 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:43:24.941819 | orchestrator |
2026-02-23 20:43:24.941823 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-02-23 20:43:24.941826 | orchestrator | Monday 23 February 2026 20:40:47 +0000 (0:00:00.345) 0:00:03.288 *******
2026-02-23 20:43:24.941830 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-23 20:43:24.941834 | orchestrator |
2026-02-23 20:43:24.941838 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-23 20:43:24.941842 | orchestrator | Monday 23 February 2026 20:40:47 +0000 (0:00:00.773) 0:00:04.061 *******
2026-02-23 20:43:24.941845 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:43:24.941852 | orchestrator |
2026-02-23 20:43:24.941856 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-02-23 20:43:24.941860 | orchestrator | Monday 23 February 2026 20:40:48 +0000 (0:00:00.455) 0:00:04.516 *******
2026-02-23 20:43:24.941870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-23 20:43:24.941875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-23 20:43:24.941880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-23 20:43:24.941884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-23 20:43:24.941888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-23 20:43:24.941898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-23 20:43:24.941908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-23 20:43:24.941912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-23 20:43:24.941916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-23 20:43:24.941920 | orchestrator |
2026-02-23 20:43:24.941924 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-02-23 20:43:24.941928 | orchestrator | Monday 23 February 2026 20:40:51 +0000 (0:00:03.322) 0:00:07.839 *******
2026-02-23 20:43:24.941932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-23 20:43:24.941940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-23 20:43:24.941946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-23 20:43:24.941950 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:43:24.941957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:43:24.941962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.941966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:43:24.941970 | orchestrator | skipping: 
[testbed-node-1] 2026-02-23 20:43:24.941974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:43:24.941987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.941996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:43:24.942000 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:43:24.942004 | orchestrator | 2026-02-23 20:43:24.942008 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-23 20:43:24.942045 | orchestrator | Monday 23 February 2026 20:40:52 +0000 (0:00:00.604) 0:00:08.444 ******* 2026-02-23 20:43:24.942052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:43:24.942056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.942063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:43:24.942068 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.942075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:43:24.942084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.942088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:43:24.942092 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:43:24.942096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:43:24.942104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.942108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:43:24.942112 | orchestrator | skipping: 
[testbed-node-2] 2026-02-23 20:43:24.942116 | orchestrator | 2026-02-23 20:43:24.942120 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-23 20:43:24.942127 | orchestrator | Monday 23 February 2026 20:40:53 +0000 (0:00:00.775) 0:00:09.220 ******* 2026-02-23 20:43:24.942135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:43:24.942139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:43:24.942144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:43:24.942151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942185 | orchestrator | 2026-02-23 20:43:24.942190 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-23 20:43:24.942193 | orchestrator | Monday 23 February 2026 20:40:56 +0000 (0:00:03.674) 0:00:12.894 ******* 2026-02-23 20:43:24.942198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:43:24.942205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.942217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:43:24.942226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.942240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:43:24.942247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.942257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942279 | orchestrator | 2026-02-23 20:43:24.942285 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-23 20:43:24.942290 | orchestrator | Monday 23 February 2026 20:41:02 +0000 (0:00:05.224) 0:00:18.119 ******* 2026-02-23 20:43:24.942296 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:43:24.942302 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:43:24.942313 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:43:24.942319 | orchestrator | 2026-02-23 20:43:24.942325 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-23 20:43:24.942331 | orchestrator | Monday 23 February 2026 20:41:03 +0000 (0:00:01.651) 0:00:19.771 ******* 2026-02-23 20:43:24.942336 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.942342 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:43:24.942348 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:43:24.942354 | orchestrator | 2026-02-23 20:43:24.942360 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-23 20:43:24.942365 | orchestrator | Monday 23 February 2026 20:41:04 +0000 (0:00:00.531) 0:00:20.303 ******* 2026-02-23 
20:43:24.942372 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.942378 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:43:24.942384 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:43:24.942390 | orchestrator | 2026-02-23 20:43:24.942396 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-23 20:43:24.942403 | orchestrator | Monday 23 February 2026 20:41:04 +0000 (0:00:00.279) 0:00:20.582 ******* 2026-02-23 20:43:24.942409 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.942415 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:43:24.942422 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:43:24.942428 | orchestrator | 2026-02-23 20:43:24.942434 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-23 20:43:24.942440 | orchestrator | Monday 23 February 2026 20:41:04 +0000 (0:00:00.477) 0:00:21.060 ******* 2026-02-23 20:43:24.942446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:43:24.942456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.942468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:43:24.942475 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.942508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:43:24.942515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.942521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:43:24.942527 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:43:24.942534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-23 20:43:24.942547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-23 20:43:24.942559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-23 20:43:24.942571 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:43:24.942578 | orchestrator | 2026-02-23 20:43:24.942584 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-23 20:43:24.942589 | orchestrator | Monday 23 February 2026 20:41:05 +0000 (0:00:00.621) 0:00:21.681 ******* 2026-02-23 20:43:24.942593 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.942598 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:43:24.942602 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:43:24.942607 | orchestrator | 2026-02-23 20:43:24.942611 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-23 20:43:24.942616 | orchestrator | Monday 23 February 2026 20:41:05 +0000 (0:00:00.304) 0:00:21.986 ******* 2026-02-23 20:43:24.942620 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-23 20:43:24.942625 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-23 20:43:24.942629 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-23 20:43:24.942634 | orchestrator | 2026-02-23 20:43:24.942639 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-23 20:43:24.942643 | orchestrator | Monday 23 February 2026 20:41:07 +0000 (0:00:01.442) 0:00:23.428 ******* 2026-02-23 20:43:24.942648 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:43:24.942652 | orchestrator | 2026-02-23 20:43:24.942656 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-23 20:43:24.942661 | 
orchestrator | Monday 23 February 2026 20:41:08 +0000 (0:00:00.922) 0:00:24.351 ******* 2026-02-23 20:43:24.942665 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.942669 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:43:24.942673 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:43:24.942676 | orchestrator | 2026-02-23 20:43:24.942680 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-23 20:43:24.942684 | orchestrator | Monday 23 February 2026 20:41:09 +0000 (0:00:00.834) 0:00:25.185 ******* 2026-02-23 20:43:24.942688 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:43:24.942692 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-23 20:43:24.942696 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-23 20:43:24.942700 | orchestrator | 2026-02-23 20:43:24.942704 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-23 20:43:24.942707 | orchestrator | Monday 23 February 2026 20:41:10 +0000 (0:00:01.180) 0:00:26.366 ******* 2026-02-23 20:43:24.942711 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:43:24.942716 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:43:24.942720 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:43:24.942724 | orchestrator | 2026-02-23 20:43:24.942728 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-23 20:43:24.942732 | orchestrator | Monday 23 February 2026 20:41:10 +0000 (0:00:00.337) 0:00:26.703 ******* 2026-02-23 20:43:24.942736 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-23 20:43:24.942740 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-23 20:43:24.942744 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-23 20:43:24.942748 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-23 20:43:24.942755 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-23 20:43:24.942759 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-23 20:43:24.942763 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-23 20:43:24.942767 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-23 20:43:24.942773 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-23 20:43:24.942777 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-23 20:43:24.942781 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-23 20:43:24.942785 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-23 20:43:24.942789 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-23 20:43:24.942793 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-23 20:43:24.942800 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-23 20:43:24.942804 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-23 20:43:24.942808 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-23 20:43:24.942812 | orchestrator | changed: [testbed-node-0] => (item={'src': 
'id_rsa', 'dest': 'id_rsa'}) 2026-02-23 20:43:24.942816 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-23 20:43:24.942820 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-23 20:43:24.942824 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-23 20:43:24.942827 | orchestrator | 2026-02-23 20:43:24.942831 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-23 20:43:24.942835 | orchestrator | Monday 23 February 2026 20:41:19 +0000 (0:00:08.570) 0:00:35.274 ******* 2026-02-23 20:43:24.942839 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-23 20:43:24.942843 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-23 20:43:24.942846 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-23 20:43:24.942850 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-23 20:43:24.942855 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-23 20:43:24.942858 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-23 20:43:24.942862 | orchestrator | 2026-02-23 20:43:24.942866 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-23 20:43:24.942870 | orchestrator | Monday 23 February 2026 20:41:21 +0000 (0:00:02.590) 0:00:37.864 ******* 2026-02-23 20:43:24.942874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:43:24.942884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:43:24.942892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-23 20:43:24.942897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-23 20:43:24.942927 | orchestrator | 2026-02-23 20:43:24.942935 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-23 20:43:24.942939 | orchestrator | Monday 23 February 2026 20:41:24 +0000 (0:00:02.451) 0:00:40.315 ******* 2026-02-23 20:43:24.942943 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.942947 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:43:24.942951 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:43:24.942955 | orchestrator | 2026-02-23 20:43:24.942959 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-23 20:43:24.942962 | orchestrator | Monday 23 February 2026 20:41:24 +0000 (0:00:00.290) 0:00:40.605 ******* 2026-02-23 20:43:24.942966 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:43:24.942970 | orchestrator | 2026-02-23 20:43:24.942974 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-23 20:43:24.942978 | orchestrator | Monday 23 February 2026 20:41:26 +0000 (0:00:02.458) 0:00:43.063 ******* 2026-02-23 20:43:24.942982 | orchestrator | changed: 
[testbed-node-0] 2026-02-23 20:43:24.942986 | orchestrator | 2026-02-23 20:43:24.942990 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-23 20:43:24.942994 | orchestrator | Monday 23 February 2026 20:41:29 +0000 (0:00:02.559) 0:00:45.623 ******* 2026-02-23 20:43:24.942998 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:43:24.943001 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:43:24.943005 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:43:24.943009 | orchestrator | 2026-02-23 20:43:24.943013 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-23 20:43:24.943017 | orchestrator | Monday 23 February 2026 20:41:30 +0000 (0:00:01.110) 0:00:46.734 ******* 2026-02-23 20:43:24.943024 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:43:24.943028 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:43:24.943033 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:43:24.943037 | orchestrator | 2026-02-23 20:43:24.943041 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-23 20:43:24.943045 | orchestrator | Monday 23 February 2026 20:41:30 +0000 (0:00:00.291) 0:00:47.025 ******* 2026-02-23 20:43:24.943049 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.943053 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:43:24.943057 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:43:24.943061 | orchestrator | 2026-02-23 20:43:24.943065 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-23 20:43:24.943069 | orchestrator | Monday 23 February 2026 20:41:31 +0000 (0:00:00.323) 0:00:47.349 ******* 2026-02-23 20:43:24.943073 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:43:24.943077 | orchestrator | 2026-02-23 20:43:24.943081 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] 
****************** 2026-02-23 20:43:24.943086 | orchestrator | Monday 23 February 2026 20:41:46 +0000 (0:00:15.569) 0:01:02.919 ******* 2026-02-23 20:43:24.943089 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:43:24.943093 | orchestrator | 2026-02-23 20:43:24.943097 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-23 20:43:24.943101 | orchestrator | Monday 23 February 2026 20:41:58 +0000 (0:00:11.789) 0:01:14.708 ******* 2026-02-23 20:43:24.943105 | orchestrator | 2026-02-23 20:43:24.943109 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-23 20:43:24.943113 | orchestrator | Monday 23 February 2026 20:41:58 +0000 (0:00:00.065) 0:01:14.773 ******* 2026-02-23 20:43:24.943116 | orchestrator | 2026-02-23 20:43:24.943120 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-23 20:43:24.943124 | orchestrator | Monday 23 February 2026 20:41:58 +0000 (0:00:00.064) 0:01:14.838 ******* 2026-02-23 20:43:24.943128 | orchestrator | 2026-02-23 20:43:24.943132 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-23 20:43:24.943136 | orchestrator | Monday 23 February 2026 20:41:58 +0000 (0:00:00.066) 0:01:14.904 ******* 2026-02-23 20:43:24.943140 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:43:24.943143 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:43:24.943147 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:43:24.943151 | orchestrator | 2026-02-23 20:43:24.943155 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-23 20:43:24.943159 | orchestrator | Monday 23 February 2026 20:42:09 +0000 (0:00:10.279) 0:01:25.184 ******* 2026-02-23 20:43:24.943163 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:43:24.943166 | orchestrator | changed: [testbed-node-2] 2026-02-23 
20:43:24.943170 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:43:24.943174 | orchestrator | 2026-02-23 20:43:24.943178 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-23 20:43:24.943182 | orchestrator | Monday 23 February 2026 20:42:16 +0000 (0:00:07.666) 0:01:32.850 ******* 2026-02-23 20:43:24.943186 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:43:24.943190 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:43:24.943194 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:43:24.943198 | orchestrator | 2026-02-23 20:43:24.943202 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-23 20:43:24.943206 | orchestrator | Monday 23 February 2026 20:42:26 +0000 (0:00:10.117) 0:01:42.968 ******* 2026-02-23 20:43:24.943209 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:43:24.943213 | orchestrator | 2026-02-23 20:43:24.943219 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-23 20:43:24.943223 | orchestrator | Monday 23 February 2026 20:42:27 +0000 (0:00:00.777) 0:01:43.745 ******* 2026-02-23 20:43:24.943227 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:43:24.943235 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:43:24.943239 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:43:24.943243 | orchestrator | 2026-02-23 20:43:24.943247 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-02-23 20:43:24.943251 | orchestrator | Monday 23 February 2026 20:42:28 +0000 (0:00:00.715) 0:01:44.460 ******* 2026-02-23 20:43:24.943254 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:43:24.943266 | orchestrator | 2026-02-23 20:43:24.943271 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] 
**** 2026-02-23 20:43:24.943275 | orchestrator | Monday 23 February 2026 20:42:29 +0000 (0:00:01.572) 0:01:46.033 ******* 2026-02-23 20:43:24.943282 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-23 20:43:24.943287 | orchestrator | 2026-02-23 20:43:24.943291 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-23 20:43:24.943295 | orchestrator | Monday 23 February 2026 20:42:41 +0000 (0:00:11.816) 0:01:57.850 ******* 2026-02-23 20:43:24.943299 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-23 20:43:24.943302 | orchestrator | 2026-02-23 20:43:24.943306 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-23 20:43:24.943310 | orchestrator | Monday 23 February 2026 20:43:11 +0000 (0:00:29.303) 0:02:27.154 ******* 2026-02-23 20:43:24.943314 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-23 20:43:24.943318 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-23 20:43:24.943321 | orchestrator | 2026-02-23 20:43:24.943325 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-23 20:43:24.943329 | orchestrator | Monday 23 February 2026 20:43:17 +0000 (0:00:06.627) 0:02:33.782 ******* 2026-02-23 20:43:24.943333 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.943336 | orchestrator | 2026-02-23 20:43:24.943340 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-23 20:43:24.943344 | orchestrator | Monday 23 February 2026 20:43:17 +0000 (0:00:00.096) 0:02:33.878 ******* 2026-02-23 20:43:24.943348 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.943352 | orchestrator | 2026-02-23 20:43:24.943356 | orchestrator | TASK [service-ks-register : keystone | 
Creating roles] ************************* 2026-02-23 20:43:24.943360 | orchestrator | Monday 23 February 2026 20:43:17 +0000 (0:00:00.113) 0:02:33.992 ******* 2026-02-23 20:43:24.943364 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.943368 | orchestrator | 2026-02-23 20:43:24.943371 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-02-23 20:43:24.943375 | orchestrator | Monday 23 February 2026 20:43:18 +0000 (0:00:00.112) 0:02:34.105 ******* 2026-02-23 20:43:24.943379 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.943383 | orchestrator | 2026-02-23 20:43:24.943387 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-23 20:43:24.943391 | orchestrator | Monday 23 February 2026 20:43:18 +0000 (0:00:00.407) 0:02:34.512 ******* 2026-02-23 20:43:24.943395 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:43:24.943399 | orchestrator | 2026-02-23 20:43:24.943402 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-23 20:43:24.943406 | orchestrator | Monday 23 February 2026 20:43:22 +0000 (0:00:03.584) 0:02:38.096 ******* 2026-02-23 20:43:24.943410 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:43:24.943414 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:43:24.943418 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:43:24.943422 | orchestrator | 2026-02-23 20:43:24.943426 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:43:24.943430 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-23 20:43:24.943436 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-23 20:43:24.943443 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  
rescued=0 ignored=0 2026-02-23 20:43:24.943447 | orchestrator | 2026-02-23 20:43:24.943451 | orchestrator | 2026-02-23 20:43:24.943455 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:43:24.943458 | orchestrator | Monday 23 February 2026 20:43:22 +0000 (0:00:00.407) 0:02:38.504 ******* 2026-02-23 20:43:24.943462 | orchestrator | =============================================================================== 2026-02-23 20:43:24.943466 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.30s 2026-02-23 20:43:24.943470 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.57s 2026-02-23 20:43:24.943474 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.82s 2026-02-23 20:43:24.943498 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.79s 2026-02-23 20:43:24.943505 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.28s 2026-02-23 20:43:24.943511 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.12s 2026-02-23 20:43:24.943518 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.57s 2026-02-23 20:43:24.943524 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.67s 2026-02-23 20:43:24.943531 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.63s 2026-02-23 20:43:24.943540 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.22s 2026-02-23 20:43:24.943548 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.67s 2026-02-23 20:43:24.943552 | orchestrator | keystone : Creating default user role ----------------------------------- 3.58s 2026-02-23 20:43:24.943556 | 
orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.32s 2026-02-23 20:43:24.943560 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.59s 2026-02-23 20:43:24.943564 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.56s 2026-02-23 20:43:24.943567 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.46s 2026-02-23 20:43:24.943572 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.45s 2026-02-23 20:43:24.943578 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.65s 2026-02-23 20:43:24.943583 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.57s 2026-02-23 20:43:24.943587 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.48s 2026-02-23 20:43:24.943590 | orchestrator | 2026-02-23 20:43:24 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:27.966641 | orchestrator | 2026-02-23 20:43:27 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:43:27.966732 | orchestrator | 2026-02-23 20:43:27 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:27.967238 | orchestrator | 2026-02-23 20:43:27 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:27.967648 | orchestrator | 2026-02-23 20:43:27 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:43:27.968054 | orchestrator | 2026-02-23 20:43:27 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED 2026-02-23 20:43:27.968082 | orchestrator | 2026-02-23 20:43:27 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:31.004324 | orchestrator | 2026-02-23 20:43:31 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state 
STARTED 2026-02-23 20:43:31.005079 | orchestrator | 2026-02-23 20:43:31 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:31.005882 | orchestrator | 2026-02-23 20:43:31 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:31.006748 | orchestrator | 2026-02-23 20:43:31 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:43:31.007748 | orchestrator | 2026-02-23 20:43:31 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED 2026-02-23 20:43:31.007773 | orchestrator | 2026-02-23 20:43:31 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:34.053682 | orchestrator | 2026-02-23 20:43:34 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:43:34.055906 | orchestrator | 2026-02-23 20:43:34 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:34.057617 | orchestrator | 2026-02-23 20:43:34 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:34.059609 | orchestrator | 2026-02-23 20:43:34 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:43:34.060875 | orchestrator | 2026-02-23 20:43:34 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED 2026-02-23 20:43:34.060925 | orchestrator | 2026-02-23 20:43:34 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:37.103263 | orchestrator | 2026-02-23 20:43:37 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:43:37.109379 | orchestrator | 2026-02-23 20:43:37 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:37.111908 | orchestrator | 2026-02-23 20:43:37 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:37.114477 | orchestrator | 2026-02-23 20:43:37 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 
2026-02-23 20:43:37.116781 | orchestrator | 2026-02-23 20:43:37 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED 2026-02-23 20:43:37.116923 | orchestrator | 2026-02-23 20:43:37 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:40.160618 | orchestrator | 2026-02-23 20:43:40 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:43:40.161493 | orchestrator | 2026-02-23 20:43:40 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:40.163390 | orchestrator | 2026-02-23 20:43:40 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:40.165795 | orchestrator | 2026-02-23 20:43:40 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:43:40.167418 | orchestrator | 2026-02-23 20:43:40 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED 2026-02-23 20:43:40.167864 | orchestrator | 2026-02-23 20:43:40 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:43.214285 | orchestrator | 2026-02-23 20:43:43 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:43:43.216033 | orchestrator | 2026-02-23 20:43:43 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:43.219562 | orchestrator | 2026-02-23 20:43:43 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:43.221111 | orchestrator | 2026-02-23 20:43:43 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:43:43.225301 | orchestrator | 2026-02-23 20:43:43 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED 2026-02-23 20:43:43.225404 | orchestrator | 2026-02-23 20:43:43 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:46.269640 | orchestrator | 2026-02-23 20:43:46 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:43:46.270864 | 
orchestrator | 2026-02-23 20:43:46 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:46.272825 | orchestrator | 2026-02-23 20:43:46 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:46.274522 | orchestrator | 2026-02-23 20:43:46 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:43:46.275744 | orchestrator | 2026-02-23 20:43:46 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED 2026-02-23 20:43:46.275793 | orchestrator | 2026-02-23 20:43:46 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:49.318376 | orchestrator | 2026-02-23 20:43:49 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:43:49.320085 | orchestrator | 2026-02-23 20:43:49 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:49.321880 | orchestrator | 2026-02-23 20:43:49 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:49.323173 | orchestrator | 2026-02-23 20:43:49 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:43:49.324485 | orchestrator | 2026-02-23 20:43:49 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED 2026-02-23 20:43:49.324522 | orchestrator | 2026-02-23 20:43:49 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:52.378277 | orchestrator | 2026-02-23 20:43:52 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:43:52.381491 | orchestrator | 2026-02-23 20:43:52 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:52.384179 | orchestrator | 2026-02-23 20:43:52 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:52.387382 | orchestrator | 2026-02-23 20:43:52 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:43:52.390529 | 
orchestrator | 2026-02-23 20:43:52 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state STARTED 2026-02-23 20:43:52.391139 | orchestrator | 2026-02-23 20:43:52 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:55.443319 | orchestrator | 2026-02-23 20:43:55 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:43:55.443652 | orchestrator | 2026-02-23 20:43:55 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:55.444916 | orchestrator | 2026-02-23 20:43:55 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:55.448068 | orchestrator | 2026-02-23 20:43:55 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:43:55.452582 | orchestrator | 2026-02-23 20:43:55 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:43:55.458879 | orchestrator | 2026-02-23 20:43:55 | INFO  | Task 4903c629-c431-48f5-b17f-4da0bbece4bc is in state SUCCESS 2026-02-23 20:43:55.458977 | orchestrator | 2026-02-23 20:43:55 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:43:58.502443 | orchestrator | 2026-02-23 20:43:58 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:43:58.502524 | orchestrator | 2026-02-23 20:43:58 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:43:58.503594 | orchestrator | 2026-02-23 20:43:58 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:43:58.505340 | orchestrator | 2026-02-23 20:43:58 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:43:58.506331 | orchestrator | 2026-02-23 20:43:58 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:43:58.506559 | orchestrator | 2026-02-23 20:43:58 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:01.548899 | orchestrator | 2026-02-23 
20:44:01 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:01.551092 | orchestrator | 2026-02-23 20:44:01 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:01.553710 | orchestrator | 2026-02-23 20:44:01 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:01.555795 | orchestrator | 2026-02-23 20:44:01 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:01.560342 | orchestrator | 2026-02-23 20:44:01 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:01.560452 | orchestrator | 2026-02-23 20:44:01 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:04.602680 | orchestrator | 2026-02-23 20:44:04 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:04.602735 | orchestrator | 2026-02-23 20:44:04 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:04.602743 | orchestrator | 2026-02-23 20:44:04 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:04.602750 | orchestrator | 2026-02-23 20:44:04 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:04.603937 | orchestrator | 2026-02-23 20:44:04 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:04.603961 | orchestrator | 2026-02-23 20:44:04 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:07.627244 | orchestrator | 2026-02-23 20:44:07 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:07.627791 | orchestrator | 2026-02-23 20:44:07 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:07.628598 | orchestrator | 2026-02-23 20:44:07 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:07.630577 | orchestrator | 2026-02-23 
20:44:07 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:07.631402 | orchestrator | 2026-02-23 20:44:07 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:07.631550 | orchestrator | 2026-02-23 20:44:07 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:10.666235 | orchestrator | 2026-02-23 20:44:10 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:10.666726 | orchestrator | 2026-02-23 20:44:10 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:10.667494 | orchestrator | 2026-02-23 20:44:10 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:10.668246 | orchestrator | 2026-02-23 20:44:10 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:10.669065 | orchestrator | 2026-02-23 20:44:10 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:10.669089 | orchestrator | 2026-02-23 20:44:10 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:13.699153 | orchestrator | 2026-02-23 20:44:13 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:13.700061 | orchestrator | 2026-02-23 20:44:13 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:13.700801 | orchestrator | 2026-02-23 20:44:13 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:13.701490 | orchestrator | 2026-02-23 20:44:13 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:13.702241 | orchestrator | 2026-02-23 20:44:13 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:13.702258 | orchestrator | 2026-02-23 20:44:13 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:16.730133 | orchestrator | 2026-02-23 20:44:16 | INFO  | Task 
e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:16.731204 | orchestrator | 2026-02-23 20:44:16 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:16.731255 | orchestrator | 2026-02-23 20:44:16 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:16.732218 | orchestrator | 2026-02-23 20:44:16 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:16.732894 | orchestrator | 2026-02-23 20:44:16 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:16.732934 | orchestrator | 2026-02-23 20:44:16 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:19.757108 | orchestrator | 2026-02-23 20:44:19 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:19.757203 | orchestrator | 2026-02-23 20:44:19 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:19.757899 | orchestrator | 2026-02-23 20:44:19 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:19.759564 | orchestrator | 2026-02-23 20:44:19 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:19.760115 | orchestrator | 2026-02-23 20:44:19 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:19.760160 | orchestrator | 2026-02-23 20:44:19 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:22.788791 | orchestrator | 2026-02-23 20:44:22 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:22.789240 | orchestrator | 2026-02-23 20:44:22 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:22.789990 | orchestrator | 2026-02-23 20:44:22 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:22.790625 | orchestrator | 2026-02-23 20:44:22 | INFO  | Task 
b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:22.791512 | orchestrator | 2026-02-23 20:44:22 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:22.791540 | orchestrator | 2026-02-23 20:44:22 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:25.813786 | orchestrator | 2026-02-23 20:44:25 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:25.814679 | orchestrator | 2026-02-23 20:44:25 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:25.815825 | orchestrator | 2026-02-23 20:44:25 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:25.817015 | orchestrator | 2026-02-23 20:44:25 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:25.817871 | orchestrator | 2026-02-23 20:44:25 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:25.817904 | orchestrator | 2026-02-23 20:44:25 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:28.841462 | orchestrator | 2026-02-23 20:44:28 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:28.841964 | orchestrator | 2026-02-23 20:44:28 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:28.842724 | orchestrator | 2026-02-23 20:44:28 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:28.844790 | orchestrator | 2026-02-23 20:44:28 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:28.845469 | orchestrator | 2026-02-23 20:44:28 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:28.845499 | orchestrator | 2026-02-23 20:44:28 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:31.869212 | orchestrator | 2026-02-23 20:44:31 | INFO  | Task 
e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:31.869517 | orchestrator | 2026-02-23 20:44:31 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:31.870476 | orchestrator | 2026-02-23 20:44:31 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:31.871465 | orchestrator | 2026-02-23 20:44:31 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:31.872828 | orchestrator | 2026-02-23 20:44:31 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:31.872858 | orchestrator | 2026-02-23 20:44:31 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:34.891985 | orchestrator | 2026-02-23 20:44:34 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:34.892527 | orchestrator | 2026-02-23 20:44:34 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:34.893109 | orchestrator | 2026-02-23 20:44:34 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:34.894632 | orchestrator | 2026-02-23 20:44:34 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:34.895322 | orchestrator | 2026-02-23 20:44:34 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:34.895453 | orchestrator | 2026-02-23 20:44:34 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:37.922224 | orchestrator | 2026-02-23 20:44:37 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:37.922499 | orchestrator | 2026-02-23 20:44:37 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:37.923385 | orchestrator | 2026-02-23 20:44:37 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:37.924196 | orchestrator | 2026-02-23 20:44:37 | INFO  | Task 
b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:37.924966 | orchestrator | 2026-02-23 20:44:37 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:37.925001 | orchestrator | 2026-02-23 20:44:37 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:40.949955 | orchestrator | 2026-02-23 20:44:40 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:40.950080 | orchestrator | 2026-02-23 20:44:40 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:40.950978 | orchestrator | 2026-02-23 20:44:40 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:40.951635 | orchestrator | 2026-02-23 20:44:40 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:40.953321 | orchestrator | 2026-02-23 20:44:40 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:40.953368 | orchestrator | 2026-02-23 20:44:40 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:43.975822 | orchestrator | 2026-02-23 20:44:43 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:43.976370 | orchestrator | 2026-02-23 20:44:43 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:43.977024 | orchestrator | 2026-02-23 20:44:43 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:43.977696 | orchestrator | 2026-02-23 20:44:43 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:43.979087 | orchestrator | 2026-02-23 20:44:43 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:43.979120 | orchestrator | 2026-02-23 20:44:43 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:47.000584 | orchestrator | 2026-02-23 20:44:46 | INFO  | Task 
e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:47.001845 | orchestrator | 2026-02-23 20:44:47 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:47.003092 | orchestrator | 2026-02-23 20:44:47 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:47.004166 | orchestrator | 2026-02-23 20:44:47 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:47.004728 | orchestrator | 2026-02-23 20:44:47 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:47.004764 | orchestrator | 2026-02-23 20:44:47 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:50.033857 | orchestrator | 2026-02-23 20:44:50 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:50.034832 | orchestrator | 2026-02-23 20:44:50 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:50.034904 | orchestrator | 2026-02-23 20:44:50 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:50.035878 | orchestrator | 2026-02-23 20:44:50 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:50.036699 | orchestrator | 2026-02-23 20:44:50 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:50.036759 | orchestrator | 2026-02-23 20:44:50 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:53.063921 | orchestrator | 2026-02-23 20:44:53 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:53.064008 | orchestrator | 2026-02-23 20:44:53 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state STARTED 2026-02-23 20:44:53.064612 | orchestrator | 2026-02-23 20:44:53 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:53.065106 | orchestrator | 2026-02-23 20:44:53 | INFO  | Task 
b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:53.066160 | orchestrator | 2026-02-23 20:44:53 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:53.066193 | orchestrator | 2026-02-23 20:44:53 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:56.111434 | orchestrator | 2026-02-23 20:44:56 | INFO  | Task fa709176-630f-456f-8ba0-d1f4d6d47f66 is in state STARTED 2026-02-23 20:44:56.111871 | orchestrator | 2026-02-23 20:44:56 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:56.112319 | orchestrator | 2026-02-23 20:44:56 | INFO  | Task cf1d1c9b-def6-4508-9a8b-485f97020742 is in state SUCCESS 2026-02-23 20:44:56.112708 | orchestrator | 2026-02-23 20:44:56.112738 | orchestrator | 2026-02-23 20:44:56.112746 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-23 20:44:56.112755 | orchestrator | 2026-02-23 20:44:56.112762 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-23 20:44:56.112767 | orchestrator | Monday 23 February 2026 20:42:58 +0000 (0:00:00.230) 0:00:00.230 ******* 2026-02-23 20:44:56.112771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-23 20:44:56.112776 | orchestrator | 2026-02-23 20:44:56.112780 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-23 20:44:56.112784 | orchestrator | Monday 23 February 2026 20:42:58 +0000 (0:00:00.218) 0:00:00.449 ******* 2026-02-23 20:44:56.112789 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-23 20:44:56.112793 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-23 20:44:56.112798 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-23 
20:44:56.112802 | orchestrator | 2026-02-23 20:44:56.112806 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-23 20:44:56.112811 | orchestrator | Monday 23 February 2026 20:42:59 +0000 (0:00:01.286) 0:00:01.735 ******* 2026-02-23 20:44:56.112815 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-23 20:44:56.112819 | orchestrator | 2026-02-23 20:44:56.112823 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-23 20:44:56.112827 | orchestrator | Monday 23 February 2026 20:43:01 +0000 (0:00:01.448) 0:00:03.183 ******* 2026-02-23 20:44:56.112832 | orchestrator | changed: [testbed-manager] 2026-02-23 20:44:56.112836 | orchestrator | 2026-02-23 20:44:56.112840 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-23 20:44:56.112844 | orchestrator | Monday 23 February 2026 20:43:02 +0000 (0:00:00.925) 0:00:04.108 ******* 2026-02-23 20:44:56.112849 | orchestrator | changed: [testbed-manager] 2026-02-23 20:44:56.112853 | orchestrator | 2026-02-23 20:44:56.112857 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-23 20:44:56.112861 | orchestrator | Monday 23 February 2026 20:43:02 +0000 (0:00:00.898) 0:00:05.007 ******* 2026-02-23 20:44:56.112865 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-02-23 20:44:56.112869 | orchestrator | ok: [testbed-manager] 2026-02-23 20:44:56.112874 | orchestrator | 2026-02-23 20:44:56.112878 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-23 20:44:56.112882 | orchestrator | Monday 23 February 2026 20:43:44 +0000 (0:00:41.257) 0:00:46.265 ******* 2026-02-23 20:44:56.112886 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-02-23 20:44:56.112890 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-02-23 20:44:56.112895 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-02-23 20:44:56.112899 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-02-23 20:44:56.112903 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-02-23 20:44:56.112907 | orchestrator | 2026-02-23 20:44:56.112911 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-23 20:44:56.112915 | orchestrator | Monday 23 February 2026 20:43:48 +0000 (0:00:04.089) 0:00:50.354 ******* 2026-02-23 20:44:56.112919 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-23 20:44:56.112923 | orchestrator | 2026-02-23 20:44:56.112927 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-23 20:44:56.112931 | orchestrator | Monday 23 February 2026 20:43:48 +0000 (0:00:00.471) 0:00:50.825 ******* 2026-02-23 20:44:56.112935 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:44:56.112940 | orchestrator | 2026-02-23 20:44:56.112952 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-23 20:44:56.112957 | orchestrator | Monday 23 February 2026 20:43:48 +0000 (0:00:00.125) 0:00:50.951 ******* 2026-02-23 20:44:56.112961 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:44:56.112965 | orchestrator | 2026-02-23 20:44:56.112969 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-02-23 20:44:56.112980 | orchestrator | Monday 23 February 2026 20:43:49 +0000 (0:00:00.481) 0:00:51.432 ******* 2026-02-23 20:44:56.112985 | orchestrator | changed: [testbed-manager] 2026-02-23 20:44:56.112989 | orchestrator | 2026-02-23 20:44:56.112993 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-02-23 20:44:56.112998 | orchestrator | Monday 23 February 2026 20:43:50 +0000 (0:00:01.448) 0:00:52.881 ******* 2026-02-23 20:44:56.113002 | orchestrator | changed: [testbed-manager] 2026-02-23 20:44:56.113006 | orchestrator | 2026-02-23 20:44:56.113010 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-02-23 20:44:56.113014 | orchestrator | Monday 23 February 2026 20:43:51 +0000 (0:00:00.775) 0:00:53.657 ******* 2026-02-23 20:44:56.113018 | orchestrator | changed: [testbed-manager] 2026-02-23 20:44:56.113023 | orchestrator | 2026-02-23 20:44:56.113027 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-23 20:44:56.113031 | orchestrator | Monday 23 February 2026 20:43:52 +0000 (0:00:00.595) 0:00:54.252 ******* 2026-02-23 20:44:56.113035 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-23 20:44:56.113039 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-23 20:44:56.113043 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-23 20:44:56.113048 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-23 20:44:56.113052 | orchestrator | 2026-02-23 20:44:56.113056 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:44:56.113060 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 20:44:56.113065 | orchestrator | 2026-02-23 20:44:56.113069 | orchestrator | 2026-02-23 
20:44:56.113080 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:44:56.113084 | orchestrator | Monday 23 February 2026 20:43:53 +0000 (0:00:01.527) 0:00:55.779 ******* 2026-02-23 20:44:56.113089 | orchestrator | =============================================================================== 2026-02-23 20:44:56.113093 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.26s 2026-02-23 20:44:56.113097 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.09s 2026-02-23 20:44:56.113101 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s 2026-02-23 20:44:56.113105 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.45s 2026-02-23 20:44:56.113109 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.45s 2026-02-23 20:44:56.113114 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.29s 2026-02-23 20:44:56.113118 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.93s 2026-02-23 20:44:56.113122 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s 2026-02-23 20:44:56.113126 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s 2026-02-23 20:44:56.113130 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2026-02-23 20:44:56.113134 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.48s 2026-02-23 20:44:56.113138 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2026-02-23 20:44:56.113143 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2026-02-23 20:44:56.113239 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-02-23 20:44:56.113245 | orchestrator | 2026-02-23 20:44:56.113250 | orchestrator | 2026-02-23 20:44:56.113259 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-02-23 20:44:56.113263 | orchestrator | 2026-02-23 20:44:56.113267 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-02-23 20:44:56.113271 | orchestrator | Monday 23 February 2026 20:43:27 +0000 (0:00:00.078) 0:00:00.078 ******* 2026-02-23 20:44:56.113275 | orchestrator | changed: [localhost] 2026-02-23 20:44:56.113280 | orchestrator | 2026-02-23 20:44:56.113284 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-02-23 20:44:56.113288 | orchestrator | Monday 23 February 2026 20:43:28 +0000 (0:00:00.896) 0:00:00.974 ******* 2026-02-23 20:44:56.113292 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-02-23 20:44:56.113296 | orchestrator | changed: [localhost] 2026-02-23 20:44:56.113301 | orchestrator | 2026-02-23 20:44:56.113305 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-02-23 20:44:56.113309 | orchestrator | Monday 23 February 2026 20:44:23 +0000 (0:00:55.182) 0:00:56.157 ******* 2026-02-23 20:44:56.113314 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 
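The `FAILED - RETRYING: ... (3 retries left)` lines above come from Ansible's `retries`/`until` loop around the image downloads. A minimal Python sketch of the same retry pattern (the injectable `fetch` callable and the delay value are illustrative assumptions, not the playbook's actual code):

```python
import time


def download_with_retries(fetch, retries=3, delay=1.0):
    """Call fetch() until it succeeds, retrying up to `retries` extra times.

    Mirrors the shape of Ansible's retries/until loop seen in the log above;
    `fetch` is any callable that raises on failure (e.g. an HTTP download).
    """
    last_error = None
    for attempt in range(retries + 1):
        try:
            return fetch()
        except OSError as exc:  # in real code, catch the relevant error type
            last_error = exc
            if attempt < retries:
                print(f"FAILED - RETRYING: download ({retries - attempt} retries left).")
                time.sleep(delay)
    raise RuntimeError(f"download failed after {retries + 1} attempts") from last_error
```

In the log, both ironic-agent downloads succeeded on their first retry, which is why each task shows a single `FAILED - RETRYING` line followed by `changed`.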
2026-02-23 20:44:56.113318 | orchestrator | changed: [localhost] 2026-02-23 20:44:56.113322 | orchestrator | 2026-02-23 20:44:56.113326 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:44:56.113330 | orchestrator | 2026-02-23 20:44:56.113335 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:44:56.113339 | orchestrator | Monday 23 February 2026 20:44:51 +0000 (0:00:27.547) 0:01:23.704 ******* 2026-02-23 20:44:56.113343 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:44:56.113347 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:44:56.113351 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:44:56.113356 | orchestrator | 2026-02-23 20:44:56.113360 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:44:56.113364 | orchestrator | Monday 23 February 2026 20:44:51 +0000 (0:00:00.599) 0:01:24.303 ******* 2026-02-23 20:44:56.113368 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-02-23 20:44:56.113372 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-02-23 20:44:56.113376 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-02-23 20:44:56.113381 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-02-23 20:44:56.113385 | orchestrator | 2026-02-23 20:44:56.113392 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-02-23 20:44:56.113396 | orchestrator | skipping: no hosts matched 2026-02-23 20:44:56.113401 | orchestrator | 2026-02-23 20:44:56.113405 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:44:56.113409 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:44:56.113414 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:44:56.113419 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:44:56.113423 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:44:56.113427 | orchestrator | 2026-02-23 20:44:56.113432 | orchestrator | 2026-02-23 20:44:56.113436 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:44:56.113440 | orchestrator | Monday 23 February 2026 20:44:52 +0000 (0:00:01.118) 0:01:25.423 ******* 2026-02-23 20:44:56.113444 | orchestrator | =============================================================================== 2026-02-23 20:44:56.113449 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 55.18s 2026-02-23 20:44:56.113453 | orchestrator | Download ironic-agent kernel ------------------------------------------- 27.55s 2026-02-23 20:44:56.113466 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.12s 2026-02-23 20:44:56.113471 | orchestrator | Ensure the destination directory exists --------------------------------- 0.90s 2026-02-23 20:44:56.113476 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s 2026-02-23 20:44:56.113480 | orchestrator | 2026-02-23 20:44:56 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:56.114050 | orchestrator | 2026-02-23 20:44:56 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:56.114944 | orchestrator | 2026-02-23 20:44:56 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:56.114966 | orchestrator | 2026-02-23 20:44:56 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:44:59.138441 | orchestrator | 2026-02-23 
20:44:59 | INFO  | Task fa709176-630f-456f-8ba0-d1f4d6d47f66 is in state STARTED 2026-02-23 20:44:59.138939 | orchestrator | 2026-02-23 20:44:59 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:44:59.139575 | orchestrator | 2026-02-23 20:44:59 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:44:59.140194 | orchestrator | 2026-02-23 20:44:59 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:44:59.140931 | orchestrator | 2026-02-23 20:44:59 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:44:59.140957 | orchestrator | 2026-02-23 20:44:59 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:45:29.650865 | orchestrator | 2026-02-23 20:45:29 | INFO  | Task
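The repeated `Task ... is in state STARTED` / `Wait 1 second(s) until the next check` entries are a fixed-interval poll over a set of task IDs until each leaves the running state. A rough sketch of such a loop (the `get_state` callable is a hypothetical stand-in, not the actual OSISM client API):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll each task's state until none is PENDING/STARTED, logging as above.

    get_state(task_id) is assumed to return a state string such as
    "STARTED" or "SUCCESS"; returns True once every task has finished.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending and time.monotonic() < deadline:
        for task_id in sorted(pending):  # sorted() copies, so discard is safe
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return not pending
```

This matches the observed behavior: each check logs all still-running task IDs, then sleeps one second, until a task flips to SUCCESS and drops out of the list.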
fa709176-630f-456f-8ba0-d1f4d6d47f66 is in state STARTED 2026-02-23 20:45:29.650974 | orchestrator | 2026-02-23 20:45:29 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:45:29.650986 | orchestrator | 2026-02-23 20:45:29 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state STARTED 2026-02-23 20:45:29.650993 | orchestrator | 2026-02-23 20:45:29 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:45:29.650999 | orchestrator | 2026-02-23 20:45:29 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state STARTED 2026-02-23 20:45:29.651006 | orchestrator | 2026-02-23 20:45:29 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:45:32.643647 | orchestrator | 2026-02-23 20:45:32 | INFO  | Task fa709176-630f-456f-8ba0-d1f4d6d47f66 is in state STARTED 2026-02-23 20:45:32.644290 | orchestrator | 2026-02-23 20:45:32 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:45:32.645745 | orchestrator | 2026-02-23 20:45:32 | INFO  | Task c9310d29-1011-45b2-97a0-f7e68ed78cd1 is in state SUCCESS 2026-02-23 20:45:32.647068 | orchestrator | 2026-02-23 20:45:32.647135 | orchestrator | 2026-02-23 20:45:32.647140 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:45:32.647145 | orchestrator | 2026-02-23 20:45:32.647149 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:45:32.647154 | orchestrator | Monday 23 February 2026 20:43:27 +0000 (0:00:00.207) 0:00:00.207 ******* 2026-02-23 20:45:32.647158 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:45:32.647162 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:45:32.647166 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:45:32.647170 | orchestrator | 2026-02-23 20:45:32.647174 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 
20:45:32.647178 | orchestrator | Monday 23 February 2026 20:43:27 +0000 (0:00:00.231) 0:00:00.438 ******* 2026-02-23 20:45:32.647182 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-23 20:45:32.647186 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-23 20:45:32.647189 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-23 20:45:32.647193 | orchestrator | 2026-02-23 20:45:32.647197 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-02-23 20:45:32.647201 | orchestrator | 2026-02-23 20:45:32.647204 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-23 20:45:32.647219 | orchestrator | Monday 23 February 2026 20:43:28 +0000 (0:00:00.364) 0:00:00.803 ******* 2026-02-23 20:45:32.647223 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:45:32.647227 | orchestrator | 2026-02-23 20:45:32.647231 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-23 20:45:32.647234 | orchestrator | Monday 23 February 2026 20:43:28 +0000 (0:00:00.538) 0:00:01.342 ******* 2026-02-23 20:45:32.647239 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-23 20:45:32.647243 | orchestrator | 2026-02-23 20:45:32.647246 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-23 20:45:32.647252 | orchestrator | Monday 23 February 2026 20:43:32 +0000 (0:00:03.720) 0:00:05.062 ******* 2026-02-23 20:45:32.647258 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-23 20:45:32.647268 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-23 20:45:32.647476 | orchestrator | 
2026-02-23 20:45:32.647488 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-23 20:45:32.647492 | orchestrator | Monday 23 February 2026 20:43:39 +0000 (0:00:07.329) 0:00:12.391 ******* 2026-02-23 20:45:32.647496 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-23 20:45:32.647500 | orchestrator | 2026-02-23 20:45:32.647504 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-23 20:45:32.647508 | orchestrator | Monday 23 February 2026 20:43:42 +0000 (0:00:03.088) 0:00:15.480 ******* 2026-02-23 20:45:32.647512 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-23 20:45:32.647516 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-23 20:45:32.647520 | orchestrator | 2026-02-23 20:45:32.647524 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-23 20:45:32.647527 | orchestrator | Monday 23 February 2026 20:43:47 +0000 (0:00:04.635) 0:00:20.116 ******* 2026-02-23 20:45:32.647531 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-23 20:45:32.647542 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-23 20:45:32.647546 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-23 20:45:32.647549 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-23 20:45:32.647553 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-23 20:45:32.647557 | orchestrator | 2026-02-23 20:45:32.647561 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-23 20:45:32.647565 | orchestrator | Monday 23 February 2026 20:44:05 +0000 (0:00:17.889) 0:00:38.006 ******* 2026-02-23 20:45:32.647568 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-23 20:45:32.647572 | orchestrator | 
2026-02-23 20:45:32.647576 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-23 20:45:32.647580 | orchestrator | Monday 23 February 2026 20:44:09 +0000 (0:00:04.005) 0:00:42.011 ******* 2026-02-23 20:45:32.647585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.647602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.647606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.647611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647645 | orchestrator | 2026-02-23 20:45:32.647649 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-23 20:45:32.647653 | orchestrator | Monday 23 February 2026 20:44:11 +0000 (0:00:02.184) 0:00:44.196 ******* 2026-02-23 20:45:32.647657 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-23 20:45:32.647661 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-23 20:45:32.647664 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-23 20:45:32.647668 | orchestrator | 2026-02-23 20:45:32.647672 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-23 20:45:32.647676 | orchestrator | Monday 23 February 2026 20:44:13 +0000 (0:00:01.545) 0:00:45.742 ******* 2026-02-23 20:45:32.647680 | orchestrator | skipping: 
[testbed-node-0] 2026-02-23 20:45:32.647684 | orchestrator | 2026-02-23 20:45:32.647687 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-23 20:45:32.647691 | orchestrator | Monday 23 February 2026 20:44:13 +0000 (0:00:00.177) 0:00:45.919 ******* 2026-02-23 20:45:32.647695 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:45:32.647699 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:45:32.647702 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:45:32.647706 | orchestrator | 2026-02-23 20:45:32.647710 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-23 20:45:32.647714 | orchestrator | Monday 23 February 2026 20:44:13 +0000 (0:00:00.428) 0:00:46.347 ******* 2026-02-23 20:45:32.647719 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:45:32.647723 | orchestrator | 2026-02-23 20:45:32.647727 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-23 20:45:32.647731 | orchestrator | Monday 23 February 2026 20:44:14 +0000 (0:00:00.464) 0:00:46.812 ******* 2026-02-23 20:45:32.647735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.647778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.647784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.647788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.647822 | orchestrator | 2026-02-23 20:45:32.647826 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-02-23 20:45:32.647830 | orchestrator | Monday 23 February 2026 20:44:18 +0000 (0:00:04.132) 0:00:50.945 ******* 2026-02-23 20:45:32.647834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:45:32.647840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 
20:45:32.647846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.647850 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:45:32.647857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:45:32.647861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.647865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.647869 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:45:32.647873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2026-02-23 20:45:32.647880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.647884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.647888 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:45:32.647894 | orchestrator | 2026-02-23 20:45:32.647901 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-23 20:45:32.647910 | orchestrator | Monday 23 February 2026 20:44:19 +0000 (0:00:01.472) 0:00:52.418 ******* 2026-02-23 20:45:32.647922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:45:32.647928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.647935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2026-02-23 20:45:32.647941 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:45:32.647950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:45:32.647963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.647970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.647977 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:45:32.647987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:45:32.647994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.648000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.648012 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:45:32.648018 | orchestrator | 2026-02-23 20:45:32.648024 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-23 20:45:32.648030 | orchestrator | Monday 23 February 2026 20:44:20 +0000 (0:00:00.954) 0:00:53.373 ******* 2026-02-23 20:45:32.648039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.648050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.648057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.648064 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}}) 2026-02-23 20:45:32.648101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648124 | orchestrator | 2026-02-23 20:45:32.648131 | orchestrator | TASK [barbican : Copying over barbican-api.ini] 
******************************** 2026-02-23 20:45:32.648137 | orchestrator | Monday 23 February 2026 20:44:24 +0000 (0:00:03.441) 0:00:56.814 ******* 2026-02-23 20:45:32.648144 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:45:32.648150 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:45:32.648157 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:45:32.648161 | orchestrator | 2026-02-23 20:45:32.648165 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-23 20:45:32.648168 | orchestrator | Monday 23 February 2026 20:44:26 +0000 (0:00:02.545) 0:00:59.360 ******* 2026-02-23 20:45:32.648172 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:45:32.648176 | orchestrator | 2026-02-23 20:45:32.648183 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-23 20:45:32.648187 | orchestrator | Monday 23 February 2026 20:44:27 +0000 (0:00:00.992) 0:01:00.352 ******* 2026-02-23 20:45:32.648191 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:45:32.648194 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:45:32.648198 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:45:32.648202 | orchestrator | 2026-02-23 20:45:32.648206 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-23 20:45:32.648209 | orchestrator | Monday 23 February 2026 20:44:28 +0000 (0:00:00.855) 0:01:01.208 ******* 2026-02-23 20:45:32.648215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.648220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.648227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.648231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648260 | orchestrator | 2026-02-23 20:45:32.648264 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-23 20:45:32.648270 | orchestrator | Monday 23 February 2026 20:44:38 +0000 (0:00:10.266) 0:01:11.475 ******* 2026-02-23 20:45:32.648274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:45:32.648281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.648285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.648289 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:45:32.648295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:45:32.648299 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.648306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.648310 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:45:32.648316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-23 20:45:32.648321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.648326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:45:32.648330 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:45:32.648334 | orchestrator | 2026-02-23 20:45:32.648338 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-23 20:45:32.648342 | orchestrator | Monday 23 February 2026 20:44:39 +0000 (0:00:00.694) 0:01:12.169 ******* 2026-02-23 20:45:32.648346 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.648353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.648360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-23 20:45:32.648364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:45:32.648436 | orchestrator | 2026-02-23 20:45:32.648440 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-23 20:45:32.648445 | orchestrator | Monday 23 February 2026 20:44:43 +0000 (0:00:03.783) 0:01:15.953 ******* 2026-02-23 20:45:32.648449 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:45:32.648453 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:45:32.648457 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:45:32.648462 | orchestrator | 2026-02-23 20:45:32.648466 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-23 20:45:32.648470 | orchestrator | Monday 23 February 2026 20:44:43 +0000 (0:00:00.580) 0:01:16.533 ******* 2026-02-23 20:45:32.648475 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:45:32.648479 | orchestrator | 2026-02-23 20:45:32.648483 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-23 20:45:32.648487 | orchestrator | Monday 23 February 2026 20:44:45 +0000 (0:00:02.056) 0:01:18.590 ******* 2026-02-23 20:45:32.648492 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:45:32.648496 | 
orchestrator | 2026-02-23 20:45:32.648500 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-23 20:45:32.648505 | orchestrator | Monday 23 February 2026 20:44:48 +0000 (0:00:02.455) 0:01:21.045 ******* 2026-02-23 20:45:32.648509 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:45:32.648514 | orchestrator | 2026-02-23 20:45:32.648518 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-23 20:45:32.648522 | orchestrator | Monday 23 February 2026 20:45:00 +0000 (0:00:12.172) 0:01:33.218 ******* 2026-02-23 20:45:32.648527 | orchestrator | 2026-02-23 20:45:32.648531 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-23 20:45:32.648535 | orchestrator | Monday 23 February 2026 20:45:00 +0000 (0:00:00.125) 0:01:33.344 ******* 2026-02-23 20:45:32.648539 | orchestrator | 2026-02-23 20:45:32.648544 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-23 20:45:32.648548 | orchestrator | Monday 23 February 2026 20:45:00 +0000 (0:00:00.123) 0:01:33.467 ******* 2026-02-23 20:45:32.648552 | orchestrator | 2026-02-23 20:45:32.648556 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-23 20:45:32.648562 | orchestrator | Monday 23 February 2026 20:45:00 +0000 (0:00:00.142) 0:01:33.610 ******* 2026-02-23 20:45:32.648566 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:45:32.648570 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:45:32.648573 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:45:32.648577 | orchestrator | 2026-02-23 20:45:32.648581 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-23 20:45:32.648585 | orchestrator | Monday 23 February 2026 20:45:11 +0000 (0:00:10.716) 0:01:44.326 ******* 2026-02-23 20:45:32.648589 | 
orchestrator | changed: [testbed-node-2] 2026-02-23 20:45:32.648592 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:45:32.648599 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:45:32.648602 | orchestrator | 2026-02-23 20:45:32.648606 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-23 20:45:32.648610 | orchestrator | Monday 23 February 2026 20:45:21 +0000 (0:00:10.107) 0:01:54.434 ******* 2026-02-23 20:45:32.648614 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:45:32.648618 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:45:32.648622 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:45:32.648625 | orchestrator | 2026-02-23 20:45:32.648629 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:45:32.648633 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-23 20:45:32.648638 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-23 20:45:32.648642 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-23 20:45:32.648645 | orchestrator | 2026-02-23 20:45:32.648649 | orchestrator | 2026-02-23 20:45:32.648653 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:45:32.648657 | orchestrator | Monday 23 February 2026 20:45:31 +0000 (0:00:09.818) 0:02:04.252 ******* 2026-02-23 20:45:32.648661 | orchestrator | =============================================================================== 2026-02-23 20:45:32.648665 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.89s 2026-02-23 20:45:32.648672 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.17s 2026-02-23 20:45:32.648676 | orchestrator | 
barbican : Restart barbican-api container ------------------------------ 10.72s 2026-02-23 20:45:32.648680 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.27s 2026-02-23 20:45:32.648683 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.11s 2026-02-23 20:45:32.648687 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.82s 2026-02-23 20:45:32.648691 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.33s 2026-02-23 20:45:32.648695 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.64s 2026-02-23 20:45:32.648699 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.13s 2026-02-23 20:45:32.648702 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.00s 2026-02-23 20:45:32.648706 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.78s 2026-02-23 20:45:32.648710 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.72s 2026-02-23 20:45:32.648714 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.44s 2026-02-23 20:45:32.648717 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.09s 2026-02-23 20:45:32.648721 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.55s 2026-02-23 20:45:32.648725 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.46s 2026-02-23 20:45:32.648729 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.19s 2026-02-23 20:45:32.648732 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.06s 2026-02-23 20:45:32.648736 | orchestrator | barbican : 
Ensuring vassals config directories exist -------------------- 1.54s 2026-02-23 20:45:32.648740 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.47s 2026-02-23 20:45:32.648744 | orchestrator | 2026-02-23 20:45:32 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:45:32.648748 | orchestrator | 2026-02-23 20:45:32 | INFO  | Task 8595cf83-dd02-460a-9c1a-06b517cb0c90 is in state SUCCESS 2026-02-23 20:45:32.648754 | orchestrator | 2026-02-23 20:45:32 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:45:35.675658 | orchestrator | 2026-02-23 20:45:35 | INFO  | Task fa709176-630f-456f-8ba0-d1f4d6d47f66 is in state STARTED 2026-02-23 20:45:35.676199 | orchestrator | 2026-02-23 20:45:35 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:45:35.676851 | orchestrator | 2026-02-23 20:45:35 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:45:35.677663 | orchestrator | 2026-02-23 20:45:35 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:45:35.677687 | orchestrator | 2026-02-23 20:45:35 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:45:38.699892 | orchestrator | 2026-02-23 20:45:38 | INFO  | Task fa709176-630f-456f-8ba0-d1f4d6d47f66 is in state STARTED 2026-02-23 20:45:38.702525 | orchestrator | 2026-02-23 20:45:38 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:45:38.703105 | orchestrator | 2026-02-23 20:45:38 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:45:38.704007 | orchestrator | 2026-02-23 20:45:38 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:45:38.704074 | orchestrator | 2026-02-23 20:45:38 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:45:41.724233 | orchestrator | 2026-02-23 20:45:41 | INFO  | Task 
fa709176-630f-456f-8ba0-d1f4d6d47f66 is in state STARTED 2026-02-23 20:45:41.724897 | orchestrator | 2026-02-23 20:45:41 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:45:41.725642 | orchestrator | 2026-02-23 20:45:41 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:45:41.726404 | orchestrator | 2026-02-23 20:45:41 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:45:41.726449 | orchestrator | 2026-02-23 20:45:41 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:46:09.126824 | orchestrator | 2026-02-23 20:46:09 | INFO  | Task
fa709176-630f-456f-8ba0-d1f4d6d47f66 is in state SUCCESS 2026-02-23 20:46:09.127531 | orchestrator | 2026-02-23 20:46:09.127578 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-23 20:46:09.127587 | orchestrator | 2.16.14 2026-02-23 20:46:09.127594 | orchestrator | 2026-02-23 20:46:09.127600 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-23 20:46:09.127607 | orchestrator | 2026-02-23 20:46:09.127613 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-23 20:46:09.127620 | orchestrator | Monday 23 February 2026 20:43:58 +0000 (0:00:00.216) 0:00:00.216 ******* 2026-02-23 20:46:09.127626 | orchestrator | changed: [testbed-manager] 2026-02-23 20:46:09.127632 | orchestrator | 2026-02-23 20:46:09.127638 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-23 20:46:09.127644 | orchestrator | Monday 23 February 2026 20:43:59 +0000 (0:00:01.477) 0:00:01.693 ******* 2026-02-23 20:46:09.127650 | orchestrator | changed: [testbed-manager] 2026-02-23 20:46:09.127656 | orchestrator | 2026-02-23 20:46:09.127662 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-23 20:46:09.127668 | orchestrator | Monday 23 February 2026 20:44:00 +0000 (0:00:01.113) 0:00:02.806 ******* 2026-02-23 20:46:09.127674 | orchestrator | changed: [testbed-manager] 2026-02-23 20:46:09.127679 | orchestrator | 2026-02-23 20:46:09.127685 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-23 20:46:09.127691 | orchestrator | Monday 23 February 2026 20:44:01 +0000 (0:00:00.953) 0:00:03.760 ******* 2026-02-23 20:46:09.127697 | orchestrator | changed: [testbed-manager] 2026-02-23 20:46:09.127703 | orchestrator | 2026-02-23 20:46:09.127709 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to 
error] ****************************
2026-02-23 20:46:09.127715 | orchestrator | Monday 23 February 2026 20:44:02 +0000 (0:00:01.075) 0:00:04.836 *******
2026-02-23 20:46:09.127720 | orchestrator | changed: [testbed-manager]
2026-02-23 20:46:09.127726 | orchestrator |
2026-02-23 20:46:09.127732 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-02-23 20:46:09.127738 | orchestrator | Monday 23 February 2026 20:44:03 +0000 (0:00:00.958) 0:00:05.794 *******
2026-02-23 20:46:09.127744 | orchestrator | changed: [testbed-manager]
2026-02-23 20:46:09.127749 | orchestrator |
2026-02-23 20:46:09.127755 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-02-23 20:46:09.127761 | orchestrator | Monday 23 February 2026 20:44:05 +0000 (0:00:01.231) 0:00:07.025 *******
2026-02-23 20:46:09.127767 | orchestrator | changed: [testbed-manager]
2026-02-23 20:46:09.127772 | orchestrator |
2026-02-23 20:46:09.127778 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-02-23 20:46:09.127784 | orchestrator | Monday 23 February 2026 20:44:07 +0000 (0:00:02.028) 0:00:09.053 *******
2026-02-23 20:46:09.127790 | orchestrator | changed: [testbed-manager]
2026-02-23 20:46:09.127796 | orchestrator |
2026-02-23 20:46:09.127802 | orchestrator | TASK [Create admin user] *******************************************************
2026-02-23 20:46:09.127808 | orchestrator | Monday 23 February 2026 20:44:08 +0000 (0:00:01.028) 0:00:10.082 *******
2026-02-23 20:46:09.127814 | orchestrator | changed: [testbed-manager]
2026-02-23 20:46:09.127820 | orchestrator |
2026-02-23 20:46:09.127826 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-02-23 20:46:09.127832 | orchestrator | Monday 23 February 2026 20:45:06 +0000 (0:00:58.730) 0:01:08.812 *******
2026-02-23 20:46:09.127837 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:46:09.127843 | orchestrator |
2026-02-23 20:46:09.127848 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-23 20:46:09.127854 | orchestrator |
2026-02-23 20:46:09.127868 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-23 20:46:09.127874 | orchestrator | Monday 23 February 2026 20:45:06 +0000 (0:00:00.106) 0:01:08.919 *******
2026-02-23 20:46:09.127880 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:46:09.127886 | orchestrator |
2026-02-23 20:46:09.127891 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-23 20:46:09.127907 | orchestrator |
2026-02-23 20:46:09.127913 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-23 20:46:09.127919 | orchestrator | Monday 23 February 2026 20:45:18 +0000 (0:00:11.462) 0:01:20.381 *******
2026-02-23 20:46:09.127924 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:46:09.127930 | orchestrator |
2026-02-23 20:46:09.127936 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-23 20:46:09.127942 | orchestrator |
2026-02-23 20:46:09.127948 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-23 20:46:09.127953 | orchestrator | Monday 23 February 2026 20:45:29 +0000 (0:00:11.239) 0:01:31.621 *******
2026-02-23 20:46:09.127959 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:46:09.127964 | orchestrator |
2026-02-23 20:46:09.127998 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:46:09.128005 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-23 20:46:09.128012 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:46:09.128018 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:46:09.128023 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:46:09.128029 | orchestrator |
2026-02-23 20:46:09.128035 | orchestrator |
2026-02-23 20:46:09.128041 | orchestrator |
2026-02-23 20:46:09.128047 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:46:09.128052 | orchestrator | Monday 23 February 2026 20:45:30 +0000 (0:00:01.233) 0:01:32.855 *******
2026-02-23 20:46:09.128058 | orchestrator | ===============================================================================
2026-02-23 20:46:09.128064 | orchestrator | Create admin user ------------------------------------------------------ 58.73s
2026-02-23 20:46:09.128079 | orchestrator | Restart ceph manager service ------------------------------------------- 23.94s
2026-02-23 20:46:09.128084 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.03s
2026-02-23 20:46:09.128090 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.48s
2026-02-23 20:46:09.128095 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.23s
2026-02-23 20:46:09.128101 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.11s
2026-02-23 20:46:09.128106 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.08s
2026-02-23 20:46:09.128112 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.03s
2026-02-23 20:46:09.128118 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.96s
2026-02-23 20:46:09.128124 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.95s
2026-02-23 20:46:09.128130 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.11s
2026-02-23 20:46:09.128135 | orchestrator |
2026-02-23 20:46:09.128142 | orchestrator |
2026-02-23 20:46:09.128148 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:46:09.128154 | orchestrator |
2026-02-23 20:46:09.128161 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:46:09.128167 | orchestrator | Monday 23 February 2026 20:44:58 +0000 (0:00:00.541) 0:00:00.541 *******
2026-02-23 20:46:09.128174 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:46:09.128180 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:46:09.128186 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:46:09.128192 | orchestrator |
2026-02-23 20:46:09.128199 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-23 20:46:09.128205 | orchestrator | Monday 23 February 2026 20:44:59 +0000 (0:00:00.381) 0:00:00.923 *******
2026-02-23 20:46:09.128216 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-02-23 20:46:09.128223 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-02-23 20:46:09.128230 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-02-23 20:46:09.128236 | orchestrator |
2026-02-23 20:46:09.128242 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-02-23 20:46:09.128249 | orchestrator |
2026-02-23 20:46:09.128255 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-23 20:46:09.128261 | orchestrator | Monday 23 February 2026 20:44:59 +0000 (0:00:00.364) 0:00:01.287 *******
2026-02-23 20:46:09.128268 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:46:09.128275 | orchestrator |
2026-02-23 20:46:09.128281 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-02-23 20:46:09.128288 | orchestrator | Monday 23 February 2026 20:45:00 +0000 (0:00:00.613) 0:00:01.900 *******
2026-02-23 20:46:09.128295 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-02-23 20:46:09.128301 | orchestrator |
2026-02-23 20:46:09.128308 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-02-23 20:46:09.128314 | orchestrator | Monday 23 February 2026 20:45:04 +0000 (0:00:03.925) 0:00:05.826 *******
2026-02-23 20:46:09.128320 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-02-23 20:46:09.128329 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-02-23 20:46:09.128334 | orchestrator |
2026-02-23 20:46:09.128339 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-02-23 20:46:09.128345 | orchestrator | Monday 23 February 2026 20:45:11 +0000 (0:00:07.043) 0:00:12.869 *******
2026-02-23 20:46:09.128349 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-23 20:46:09.128354 | orchestrator |
2026-02-23 20:46:09.128360 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-02-23 20:46:09.128367 | orchestrator | Monday 23 February 2026 20:45:15 +0000 (0:00:03.968) 0:00:16.839 *******
2026-02-23 20:46:09.128373 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-02-23 20:46:09.128379 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-23 20:46:09.128386 | orchestrator |
2026-02-23 20:46:09.128392 | orchestrator | TASK [service-ks-register : placement | Creating
roles] ************************ 2026-02-23 20:46:09.128399 | orchestrator | Monday 23 February 2026 20:45:18 +0000 (0:00:03.777) 0:00:20.616 ******* 2026-02-23 20:46:09.128405 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-23 20:46:09.128412 | orchestrator | 2026-02-23 20:46:09.128418 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-23 20:46:09.128425 | orchestrator | Monday 23 February 2026 20:45:22 +0000 (0:00:03.492) 0:00:24.109 ******* 2026-02-23 20:46:09.128431 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-23 20:46:09.128438 | orchestrator | 2026-02-23 20:46:09.128444 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-23 20:46:09.128451 | orchestrator | Monday 23 February 2026 20:45:25 +0000 (0:00:03.563) 0:00:27.673 ******* 2026-02-23 20:46:09.128458 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:09.128464 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:09.128470 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:46:09.128476 | orchestrator | 2026-02-23 20:46:09.128483 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-23 20:46:09.128489 | orchestrator | Monday 23 February 2026 20:45:26 +0000 (0:00:00.242) 0:00:27.915 ******* 2026-02-23 20:46:09.128505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128537 | orchestrator | 2026-02-23 20:46:09.128543 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-23 20:46:09.128550 | orchestrator | Monday 23 February 2026 20:45:26 +0000 (0:00:00.822) 0:00:28.737 ******* 2026-02-23 20:46:09.128557 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:09.128563 | orchestrator | 2026-02-23 20:46:09.128569 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-23 20:46:09.128575 | orchestrator | Monday 23 February 2026 20:45:27 +0000 (0:00:00.152) 0:00:28.890 ******* 2026-02-23 20:46:09.128581 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:09.128587 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:09.128592 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:46:09.128598 | orchestrator | 2026-02-23 20:46:09.128604 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-23 20:46:09.128610 | orchestrator | Monday 23 February 2026 20:45:27 +0000 (0:00:00.834) 0:00:29.724 ******* 2026-02-23 20:46:09.128616 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:46:09.128622 | orchestrator | 2026-02-23 20:46:09.128627 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-23 20:46:09.128633 | orchestrator | Monday 23 February 2026 20:45:28 +0000 (0:00:00.490) 0:00:30.214 ******* 2026-02-23 20:46:09.128643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128670 | orchestrator | 2026-02-23 20:46:09.128676 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-23 20:46:09.128682 | orchestrator | Monday 23 February 2026 20:45:30 +0000 (0:00:02.023) 0:00:32.238 ******* 2026-02-23 20:46:09.128697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:46:09.128707 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:09.128713 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:46:09.128722 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:09.128733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:46:09.128740 | orchestrator | skipping: [testbed-node-2] 2026-02-23 
20:46:09.128746 | orchestrator | 2026-02-23 20:46:09.128752 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-23 20:46:09.128758 | orchestrator | Monday 23 February 2026 20:45:31 +0000 (0:00:01.328) 0:00:33.567 ******* 2026-02-23 20:46:09.128764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:46:09.128770 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:09.128779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:46:09.128785 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:09.128795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:46:09.128801 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:46:09.128806 | orchestrator | 2026-02-23 20:46:09.128812 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-23 20:46:09.128818 | orchestrator | Monday 23 February 2026 20:45:33 +0000 (0:00:01.436) 0:00:35.004 ******* 2026-02-23 20:46:09.128828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128850 | orchestrator | 2026-02-23 20:46:09.128855 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-23 20:46:09.128867 | orchestrator | Monday 23 February 2026 20:45:34 +0000 (0:00:01.672) 0:00:36.676 ******* 2026-02-23 20:46:09.128873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.128896 | orchestrator | 2026-02-23 20:46:09.128902 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-23 20:46:09.128907 | orchestrator | Monday 23 February 2026 20:45:39 +0000 (0:00:04.372) 0:00:41.049 ******* 2026-02-23 20:46:09.128913 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-23 20:46:09.128919 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-23 20:46:09.128925 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-23 20:46:09.128931 | orchestrator | 2026-02-23 20:46:09.128937 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-23 20:46:09.128942 | orchestrator | Monday 23 February 2026 20:45:41 +0000 (0:00:01.879) 0:00:42.931 ******* 2026-02-23 20:46:09.128948 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:09.128954 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:46:09.128960 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:46:09.128965 | orchestrator | 2026-02-23 20:46:09.129160 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-23 20:46:09.129168 | orchestrator | Monday 23 February 2026 20:45:42 +0000 (0:00:01.721) 0:00:44.653 ******* 2026-02-23 20:46:09.129177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:46:09.129184 | orchestrator | 
skipping: [testbed-node-0] 2026-02-23 20:46:09.129195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:46:09.129201 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:46:09.129207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-23 20:46:09.129213 | 
orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:09.129219 | orchestrator | 2026-02-23 20:46:09.129224 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-23 20:46:09.129230 | orchestrator | Monday 23 February 2026 20:45:43 +0000 (0:00:00.633) 0:00:45.286 ******* 2026-02-23 20:46:09.129236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.129248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.129255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-23 20:46:09.129261 | orchestrator | 2026-02-23 20:46:09.129267 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-23 20:46:09.129272 | orchestrator | Monday 23 February 2026 20:45:44 +0000 (0:00:01.382) 0:00:46.669 ******* 2026-02-23 20:46:09.129278 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:09.129283 | orchestrator | 2026-02-23 20:46:09.129293 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-23 20:46:09.129299 | orchestrator | Monday 23 February 2026 20:45:47 +0000 (0:00:02.453) 0:00:49.122 ******* 2026-02-23 20:46:09.129305 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:09.129310 | orchestrator | 2026-02-23 20:46:09.129316 | orchestrator | TASK [placement : Running placement bootstrap container] 
*********************** 2026-02-23 20:46:09.129322 | orchestrator | Monday 23 February 2026 20:45:49 +0000 (0:00:02.364) 0:00:51.487 ******* 2026-02-23 20:46:09.129327 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:09.129333 | orchestrator | 2026-02-23 20:46:09.129338 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-23 20:46:09.129343 | orchestrator | Monday 23 February 2026 20:46:02 +0000 (0:00:12.686) 0:01:04.174 ******* 2026-02-23 20:46:09.129348 | orchestrator | 2026-02-23 20:46:09.129353 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-23 20:46:09.129359 | orchestrator | Monday 23 February 2026 20:46:02 +0000 (0:00:00.075) 0:01:04.249 ******* 2026-02-23 20:46:09.129365 | orchestrator | 2026-02-23 20:46:09.129371 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-23 20:46:09.129377 | orchestrator | Monday 23 February 2026 20:46:02 +0000 (0:00:00.062) 0:01:04.311 ******* 2026-02-23 20:46:09.129383 | orchestrator | 2026-02-23 20:46:09.129388 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-23 20:46:09.129394 | orchestrator | Monday 23 February 2026 20:46:02 +0000 (0:00:00.061) 0:01:04.373 ******* 2026-02-23 20:46:09.129404 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:09.129410 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:46:09.129416 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:46:09.129422 | orchestrator | 2026-02-23 20:46:09.129427 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:46:09.129434 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-23 20:46:09.129440 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2026-02-23 20:46:09.129447 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-23 20:46:09.129453 | orchestrator | 2026-02-23 20:46:09.129458 | orchestrator | 2026-02-23 20:46:09.129464 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:46:09.129470 | orchestrator | Monday 23 February 2026 20:46:07 +0000 (0:00:04.515) 0:01:08.889 ******* 2026-02-23 20:46:09.129476 | orchestrator | =============================================================================== 2026-02-23 20:46:09.129482 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.69s 2026-02-23 20:46:09.129488 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.04s 2026-02-23 20:46:09.129494 | orchestrator | placement : Restart placement-api container ----------------------------- 4.52s 2026-02-23 20:46:09.129500 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.37s 2026-02-23 20:46:09.129506 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.97s 2026-02-23 20:46:09.129514 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.93s 2026-02-23 20:46:09.129520 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.78s 2026-02-23 20:46:09.129526 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.56s 2026-02-23 20:46:09.129532 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.49s 2026-02-23 20:46:09.129538 | orchestrator | placement : Creating placement databases -------------------------------- 2.45s 2026-02-23 20:46:09.129544 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.37s 2026-02-23 20:46:09.129550 | 
orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.02s 2026-02-23 20:46:09.129556 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.88s 2026-02-23 20:46:09.129562 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.72s 2026-02-23 20:46:09.129568 | orchestrator | placement : Copying over config.json files for services ----------------- 1.67s 2026-02-23 20:46:09.129574 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.44s 2026-02-23 20:46:09.129580 | orchestrator | placement : Check placement containers ---------------------------------- 1.38s 2026-02-23 20:46:09.129586 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.33s 2026-02-23 20:46:09.129592 | orchestrator | placement : Set placement policy file ----------------------------------- 0.83s 2026-02-23 20:46:09.129598 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.82s 2026-02-23 20:46:09.129604 | orchestrator | 2026-02-23 20:46:09 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:46:09.131461 | orchestrator | 2026-02-23 20:46:09 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:46:09.134232 | orchestrator | 2026-02-23 20:46:09 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:46:09.136417 | orchestrator | 2026-02-23 20:46:09 | INFO  | Task 4b91a851-6436-46e3-a263-0c6b00d50cdb is in state STARTED 2026-02-23 20:46:09.136657 | orchestrator | 2026-02-23 20:46:09 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:46:12.181657 | orchestrator | 2026-02-23 20:46:12 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:46:12.182448 | orchestrator | 2026-02-23 20:46:12 | INFO  | Task 
b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:46:12.183390 | orchestrator | 2026-02-23 20:46:12 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:46:12.184368 | orchestrator | 2026-02-23 20:46:12 | INFO  | Task 4b91a851-6436-46e3-a263-0c6b00d50cdb is in state STARTED 2026-02-23 20:46:12.184390 | orchestrator | 2026-02-23 20:46:12 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:46:15.229945 | orchestrator | 2026-02-23 20:46:15 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:46:15.231114 | orchestrator | 2026-02-23 20:46:15 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:46:15.231777 | orchestrator | 2026-02-23 20:46:15 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state STARTED 2026-02-23 20:46:15.233190 | orchestrator | 2026-02-23 20:46:15 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:46:15.233841 | orchestrator | 2026-02-23 20:46:15 | INFO  | Task 4b91a851-6436-46e3-a263-0c6b00d50cdb is in state SUCCESS 2026-02-23 20:46:15.233910 | orchestrator | 2026-02-23 20:46:15 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:46:18.272883 | orchestrator | 2026-02-23 20:46:18 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:46:18.273247 | orchestrator | 2026-02-23 20:46:18 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:46:18.275115 | orchestrator | 2026-02-23 20:46:18 | INFO  | Task b547fcda-b4ec-4cbb-bd52-fabac922640a is in state SUCCESS 2026-02-23 20:46:18.276367 | orchestrator | 2026-02-23 20:46:18.276399 | orchestrator | 2026-02-23 20:46:18.276404 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:46:18.276409 | orchestrator | 2026-02-23 20:46:18.276413 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-02-23 20:46:18.276417 | orchestrator | Monday 23 February 2026 20:46:11 +0000 (0:00:00.155) 0:00:00.155 ******* 2026-02-23 20:46:18.276421 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:46:18.276425 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:46:18.276429 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:46:18.276433 | orchestrator | 2026-02-23 20:46:18.276437 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:46:18.276441 | orchestrator | Monday 23 February 2026 20:46:11 +0000 (0:00:00.283) 0:00:00.438 ******* 2026-02-23 20:46:18.276445 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-23 20:46:18.276448 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-23 20:46:18.276464 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-23 20:46:18.276469 | orchestrator | 2026-02-23 20:46:18.276473 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-02-23 20:46:18.276477 | orchestrator | 2026-02-23 20:46:18.276481 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-02-23 20:46:18.276484 | orchestrator | Monday 23 February 2026 20:46:11 +0000 (0:00:00.537) 0:00:00.976 ******* 2026-02-23 20:46:18.276488 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:46:18.276492 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:46:18.276496 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:46:18.276499 | orchestrator | 2026-02-23 20:46:18.276503 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:46:18.276508 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:46:18.276523 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 
ignored=0 2026-02-23 20:46:18.276527 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-23 20:46:18.276531 | orchestrator | 2026-02-23 20:46:18.276535 | orchestrator | 2026-02-23 20:46:18.276539 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:46:18.276542 | orchestrator | Monday 23 February 2026 20:46:12 +0000 (0:00:00.566) 0:00:01.543 ******* 2026-02-23 20:46:18.276583 | orchestrator | =============================================================================== 2026-02-23 20:46:18.276587 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.57s 2026-02-23 20:46:18.276591 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2026-02-23 20:46:18.276595 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-02-23 20:46:18.276599 | orchestrator | 2026-02-23 20:46:18.276603 | orchestrator | 2026-02-23 20:46:18.276606 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:46:18.276610 | orchestrator | 2026-02-23 20:46:18.276614 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:46:18.276620 | orchestrator | Monday 23 February 2026 20:43:27 +0000 (0:00:00.224) 0:00:00.224 ******* 2026-02-23 20:46:18.276626 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:46:18.276632 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:46:18.276637 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:46:18.276643 | orchestrator | 2026-02-23 20:46:18.276649 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:46:18.276654 | orchestrator | Monday 23 February 2026 20:43:27 +0000 (0:00:00.216) 0:00:00.440 ******* 2026-02-23 20:46:18.276660 | orchestrator | ok: 
[testbed-node-0] => (item=enable_designate_True) 2026-02-23 20:46:18.276667 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-23 20:46:18.276673 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-23 20:46:18.276679 | orchestrator | 2026-02-23 20:46:18.276684 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-23 20:46:18.276690 | orchestrator | 2026-02-23 20:46:18.276696 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-23 20:46:18.276702 | orchestrator | Monday 23 February 2026 20:43:28 +0000 (0:00:00.398) 0:00:00.839 ******* 2026-02-23 20:46:18.276708 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:46:18.276714 | orchestrator | 2026-02-23 20:46:18.276720 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-02-23 20:46:18.276727 | orchestrator | Monday 23 February 2026 20:43:28 +0000 (0:00:00.620) 0:00:01.459 ******* 2026-02-23 20:46:18.276733 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-23 20:46:18.276739 | orchestrator | 2026-02-23 20:46:18.276745 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-02-23 20:46:18.276751 | orchestrator | Monday 23 February 2026 20:43:32 +0000 (0:00:03.731) 0:00:05.191 ******* 2026-02-23 20:46:18.276757 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-23 20:46:18.276803 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-23 20:46:18.276811 | orchestrator | 2026-02-23 20:46:18.276817 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-23 20:46:18.276823 | 
orchestrator | Monday 23 February 2026 20:43:39 +0000 (0:00:07.432) 0:00:12.624 ******* 2026-02-23 20:46:18.276830 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-23 20:46:18.276836 | orchestrator | 2026-02-23 20:46:18.276843 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-23 20:46:18.276857 | orchestrator | Monday 23 February 2026 20:43:43 +0000 (0:00:03.147) 0:00:15.771 ******* 2026-02-23 20:46:18.276869 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-23 20:46:18.276873 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-23 20:46:18.276877 | orchestrator | 2026-02-23 20:46:18.276881 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-23 20:46:18.276885 | orchestrator | Monday 23 February 2026 20:43:47 +0000 (0:00:04.524) 0:00:20.296 ******* 2026-02-23 20:46:18.276889 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-23 20:46:18.276893 | orchestrator | 2026-02-23 20:46:18.276897 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-23 20:46:18.276900 | orchestrator | Monday 23 February 2026 20:43:51 +0000 (0:00:03.822) 0:00:24.119 ******* 2026-02-23 20:46:18.276904 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-23 20:46:18.276908 | orchestrator | 2026-02-23 20:46:18.276912 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-23 20:46:18.276919 | orchestrator | Monday 23 February 2026 20:43:55 +0000 (0:00:04.182) 0:00:28.301 ******* 2026-02-23 20:46:18.276925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.276931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.276935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.276940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.276964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.276971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.276978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.276985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.276990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.276994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277231 | orchestrator | 2026-02-23 20:46:18.277235 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-23 20:46:18.277240 | orchestrator | Monday 23 February 2026 20:43:58 +0000 (0:00:02.995) 0:00:31.297 ******* 2026-02-23 20:46:18.277244 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:18.277248 | orchestrator | 2026-02-23 20:46:18.277252 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-23 20:46:18.277257 | orchestrator | Monday 23 February 2026 20:43:58 +0000 (0:00:00.121) 0:00:31.418 ******* 2026-02-23 20:46:18.277261 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:18.277265 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:18.277270 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:46:18.277274 | orchestrator | 2026-02-23 20:46:18.277278 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-23 20:46:18.277284 | orchestrator | Monday 23 February 2026 20:43:59 +0000 (0:00:00.342) 0:00:31.760 ******* 2026-02-23 20:46:18.277290 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:46:18.277296 | orchestrator | 2026-02-23 20:46:18.277302 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-23 20:46:18.277308 | orchestrator | Monday 23 February 2026 20:43:59 +0000 (0:00:00.659) 0:00:32.420 ******* 2026-02-23 20:46:18.277315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.277322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.277332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.277344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277456 | orchestrator | 2026-02-23 20:46:18.277462 | orchestrator | TASK 
[service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-23 20:46:18.277468 | orchestrator | Monday 23 February 2026 20:44:05 +0000 (0:00:06.270) 0:00:38.691 ******* 2026-02-23 20:46:18.277477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.277483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:46:18.277494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277525 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:18.277534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.277540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-02-23 20:46:18.277547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277566 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:18.277572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.277576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:46:18.277586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277611 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:46:18.277617 | orchestrator | 2026-02-23 20:46:18.277626 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-23 20:46:18.277633 | orchestrator | Monday 23 February 2026 20:44:06 +0000 (0:00:00.884) 0:00:39.575 ******* 2026-02-23 20:46:18.277643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.277648 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:46:18.277656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277864 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:18.277875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.277879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:46:18.277888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277904 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:18.277911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.277917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:46:18.277924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.277940 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:46:18.277960 | orchestrator | 2026-02-23 20:46:18.277964 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-23 20:46:18.277968 | orchestrator | Monday 23 February 2026 20:44:08 +0000 (0:00:01.169) 0:00:40.744 ******* 2026-02-23 20:46:18.277976 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.277987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.277991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.277995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.277999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-23 20:46:18.278007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278047 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278112 | orchestrator | 2026-02-23 20:46:18.278116 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-23 20:46:18.278120 | orchestrator | Monday 23 February 2026 20:44:13 +0000 (0:00:05.968) 0:00:46.712 ******* 2026-02-23 20:46:18.278127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.278136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.278140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.278144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278165 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278191 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278325 | orchestrator | 2026-02-23 20:46:18.278331 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-23 20:46:18.278337 | orchestrator | Monday 23 February 2026 20:44:35 +0000 (0:00:21.911) 0:01:08.624 ******* 2026-02-23 20:46:18.278343 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-23 20:46:18.278349 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-23 20:46:18.278525 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-23 20:46:18.278532 | orchestrator | 2026-02-23 20:46:18.278536 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-23 20:46:18.278545 | orchestrator | Monday 23 February 2026 20:44:41 +0000 (0:00:05.514) 0:01:14.138 ******* 2026-02-23 20:46:18.278549 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-23 20:46:18.278553 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-23 20:46:18.278557 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-23 20:46:18.278560 | orchestrator | 2026-02-23 20:46:18.278564 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-23 20:46:18.278568 | orchestrator | Monday 23 February 2026 20:44:44 +0000 (0:00:03.451) 0:01:17.590 ******* 2026-02-23 20:46:18.278576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.278584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.278588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.278593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278688 | orchestrator | 2026-02-23 20:46:18.278692 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-02-23 20:46:18.278696 | orchestrator | Monday 23 February 2026 20:44:47 +0000 (0:00:02.899) 0:01:20.489 ******* 2026-02-23 20:46:18.278702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 
20:46:18.278708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.278713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.278717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.278854 | orchestrator | 2026-02-23 20:46:18.278859 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-23 20:46:18.278863 | orchestrator | Monday 23 February 2026 20:44:50 +0000 (0:00:02.670) 0:01:23.159 ******* 2026-02-23 20:46:18.278867 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:18.278871 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:18.278875 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:46:18.278878 | orchestrator | 2026-02-23 20:46:18.278882 | orchestrator | TASK 
[designate : Copying over existing policy file] *************************** 2026-02-23 20:46:18.278886 | orchestrator | Monday 23 February 2026 20:44:50 +0000 (0:00:00.376) 0:01:23.536 ******* 2026-02-23 20:46:18.278890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.278897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:46:18.278904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.278919 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:46:18.278929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278953 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:18.278960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278975 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:18.278979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-23 20:46:18.278986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-23 20:46:18.278992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.278996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.279003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2026-02-23 20:46:18.279009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:46:18.279015 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:46:18.279021 | orchestrator | 2026-02-23 20:46:18.279028 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-23 20:46:18.279034 | orchestrator | Monday 23 February 2026 20:44:51 +0000 (0:00:01.020) 0:01:24.556 ******* 2026-02-23 20:46:18.279041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.279048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.279056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-23 20:46:18.279062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:46:18.279136 | orchestrator | 2026-02-23 20:46:18.279140 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-23 20:46:18.279144 | orchestrator | Monday 23 February 2026 20:44:57 +0000 (0:00:05.339) 0:01:29.896 ******* 2026-02-23 20:46:18.279148 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:46:18.279152 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:46:18.279155 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:46:18.279159 | orchestrator | 2026-02-23 20:46:18.279163 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-02-23 20:46:18.279167 | orchestrator 
| Monday 23 February 2026 20:44:57 +0000 (0:00:00.281) 0:01:30.178 ******* 2026-02-23 20:46:18.279171 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-23 20:46:18.279175 | orchestrator | 2026-02-23 20:46:18.279179 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-23 20:46:18.279182 | orchestrator | Monday 23 February 2026 20:44:59 +0000 (0:00:02.155) 0:01:32.333 ******* 2026-02-23 20:46:18.279186 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-23 20:46:18.279190 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-23 20:46:18.279194 | orchestrator | 2026-02-23 20:46:18.279198 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-23 20:46:18.279201 | orchestrator | Monday 23 February 2026 20:45:01 +0000 (0:00:02.315) 0:01:34.649 ******* 2026-02-23 20:46:18.279205 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:18.279209 | orchestrator | 2026-02-23 20:46:18.279213 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-23 20:46:18.279217 | orchestrator | Monday 23 February 2026 20:45:17 +0000 (0:00:15.967) 0:01:50.617 ******* 2026-02-23 20:46:18.279220 | orchestrator | 2026-02-23 20:46:18.279224 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-23 20:46:18.279228 | orchestrator | Monday 23 February 2026 20:45:18 +0000 (0:00:00.127) 0:01:50.744 ******* 2026-02-23 20:46:18.279232 | orchestrator | 2026-02-23 20:46:18.279235 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-23 20:46:18.279239 | orchestrator | Monday 23 February 2026 20:45:18 +0000 (0:00:00.138) 0:01:50.882 ******* 2026-02-23 20:46:18.279243 | orchestrator | 2026-02-23 20:46:18.279247 | orchestrator | RUNNING HANDLER [designate : Restart 
designate-backend-bind9 container] ******** 2026-02-23 20:46:18.279253 | orchestrator | Monday 23 February 2026 20:45:18 +0000 (0:00:00.066) 0:01:50.949 ******* 2026-02-23 20:46:18.279257 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:46:18.279261 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:18.279264 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:46:18.279268 | orchestrator | 2026-02-23 20:46:18.279272 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-23 20:46:18.279276 | orchestrator | Monday 23 February 2026 20:45:29 +0000 (0:00:11.290) 0:02:02.240 ******* 2026-02-23 20:46:18.279282 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:46:18.279286 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:18.279289 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:46:18.279293 | orchestrator | 2026-02-23 20:46:18.279297 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-23 20:46:18.279301 | orchestrator | Monday 23 February 2026 20:45:37 +0000 (0:00:07.505) 0:02:09.745 ******* 2026-02-23 20:46:18.279305 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:18.279308 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:46:18.279312 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:46:18.279316 | orchestrator | 2026-02-23 20:46:18.279320 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-23 20:46:18.279324 | orchestrator | Monday 23 February 2026 20:45:44 +0000 (0:00:07.747) 0:02:17.493 ******* 2026-02-23 20:46:18.279328 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:46:18.279332 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:18.279335 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:46:18.279339 | orchestrator | 2026-02-23 20:46:18.279345 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns 
container] ***************** 2026-02-23 20:46:18.279349 | orchestrator | Monday 23 February 2026 20:45:54 +0000 (0:00:09.651) 0:02:27.145 ******* 2026-02-23 20:46:18.279352 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:18.279356 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:46:18.279360 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:46:18.279364 | orchestrator | 2026-02-23 20:46:18.279368 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-23 20:46:18.279371 | orchestrator | Monday 23 February 2026 20:45:59 +0000 (0:00:05.022) 0:02:32.167 ******* 2026-02-23 20:46:18.279375 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:18.279379 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:46:18.279383 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:46:18.279387 | orchestrator | 2026-02-23 20:46:18.279390 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-23 20:46:18.279394 | orchestrator | Monday 23 February 2026 20:46:09 +0000 (0:00:10.025) 0:02:42.193 ******* 2026-02-23 20:46:18.279398 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:46:18.279402 | orchestrator | 2026-02-23 20:46:18.279406 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:46:18.279410 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-23 20:46:18.279414 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-23 20:46:18.279420 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-23 20:46:18.279426 | orchestrator | 2026-02-23 20:46:18.279431 | orchestrator | 2026-02-23 20:46:18.279437 | orchestrator | TASKS RECAP ******************************************************************** 
2026-02-23 20:46:18.279443 | orchestrator | Monday 23 February 2026 20:46:16 +0000 (0:00:07.039) 0:02:49.232 ******* 2026-02-23 20:46:18.279449 | orchestrator | =============================================================================== 2026-02-23 20:46:18.279454 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.91s 2026-02-23 20:46:18.279464 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.97s 2026-02-23 20:46:18.279470 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 11.29s 2026-02-23 20:46:18.279476 | orchestrator | designate : Restart designate-worker container ------------------------- 10.03s 2026-02-23 20:46:18.279482 | orchestrator | designate : Restart designate-producer container ------------------------ 9.65s 2026-02-23 20:46:18.279489 | orchestrator | designate : Restart designate-central container ------------------------- 7.75s 2026-02-23 20:46:18.279530 | orchestrator | designate : Restart designate-api container ----------------------------- 7.51s 2026-02-23 20:46:18.279535 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.43s 2026-02-23 20:46:18.279539 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.04s 2026-02-23 20:46:18.279543 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.27s 2026-02-23 20:46:18.279547 | orchestrator | designate : Copying over config.json files for services ----------------- 5.97s 2026-02-23 20:46:18.279551 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.51s 2026-02-23 20:46:18.279554 | orchestrator | designate : Check designate containers ---------------------------------- 5.34s 2026-02-23 20:46:18.279558 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.02s 2026-02-23 
20:46:18.279562 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.52s 2026-02-23 20:46:18.279566 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.18s 2026-02-23 20:46:18.279569 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.82s 2026-02-23 20:46:18.279573 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.73s 2026-02-23 20:46:18.279577 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.45s 2026-02-23 20:46:18.279581 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.15s 2026-02-23 20:46:18.279585 | orchestrator | 2026-02-23 20:46:18 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:46:18.279589 | orchestrator | 2026-02-23 20:46:18 | INFO  | Task 3cdc6845-a925-43c0-b5b5-0ac5cdd1344e is in state STARTED 2026-02-23 20:46:18.279596 | orchestrator | 2026-02-23 20:46:18 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:46:21.315957 | orchestrator | 2026-02-23 20:46:21 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:46:21.316783 | orchestrator | 2026-02-23 20:46:21 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:46:21.318566 | orchestrator | 2026-02-23 20:46:21 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:46:21.321401 | orchestrator | 2026-02-23 20:46:21 | INFO  | Task 3cdc6845-a925-43c0-b5b5-0ac5cdd1344e is in state STARTED 2026-02-23 20:46:21.321454 | orchestrator | 2026-02-23 20:46:21 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:46:24.351657 | orchestrator | 2026-02-23 20:46:24 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:46:24.352155 | orchestrator | 2026-02-23 20:46:24 | INFO  | Task 
b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:46:24.352975 | orchestrator | 2026-02-23 20:46:24 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:46:24.353782 | orchestrator | 2026-02-23 20:46:24 | INFO  | Task 3cdc6845-a925-43c0-b5b5-0ac5cdd1344e is in state STARTED 2026-02-23 20:46:24.353828 | orchestrator | 2026-02-23 20:46:24 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:46:51.863042 | orchestrator | 2026-02-23 20:46:51 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:46:51.863094 | orchestrator | 2026-02-23 20:46:51 | INFO  | Task
b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:46:51.865709 | orchestrator | 2026-02-23 20:46:51 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:46:51.868129 | orchestrator | 2026-02-23 20:46:51 | INFO  | Task 3cdc6845-a925-43c0-b5b5-0ac5cdd1344e is in state STARTED 2026-02-23 20:46:51.875083 | orchestrator | 2026-02-23 20:46:51 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:46:54.890468 | orchestrator | 2026-02-23 20:46:54 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:46:54.890771 | orchestrator | 2026-02-23 20:46:54 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:46:54.891541 | orchestrator | 2026-02-23 20:46:54 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:46:54.892026 | orchestrator | 2026-02-23 20:46:54 | INFO  | Task 3cdc6845-a925-43c0-b5b5-0ac5cdd1344e is in state SUCCESS 2026-02-23 20:46:54.892061 | orchestrator | 2026-02-23 20:46:54 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:46:57.935591 | orchestrator | 2026-02-23 20:46:57 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:46:57.935949 | orchestrator | 2026-02-23 20:46:57 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:46:57.937059 | orchestrator | 2026-02-23 20:46:57 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:46:57.937874 | orchestrator | 2026-02-23 20:46:57 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:46:57.937924 | orchestrator | 2026-02-23 20:46:57 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:47:00.971988 | orchestrator | 2026-02-23 20:47:00 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:47:00.972244 | orchestrator | 2026-02-23 20:47:00 | INFO  | Task 
c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:47:00.973699 | orchestrator | 2026-02-23 20:47:00 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:47:00.974465 | orchestrator | 2026-02-23 20:47:00 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:47:00.974485 | orchestrator | 2026-02-23 20:47:00 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:47:19.278702 | orchestrator | 2026-02-23 20:47:19 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:47:19.280490 | orchestrator | 2026-02-23 20:47:19 | INFO  | Task
c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:47:28.419605 | orchestrator | 2026-02-23 20:47:28 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:47:28.420304 | orchestrator | 2026-02-23 20:47:28 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:47:28.420321 | orchestrator | 2026-02-23 20:47:28 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:47:31.438438 | orchestrator | 2026-02-23 20:47:31 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:47:31.438942 | orchestrator | 2026-02-23 20:47:31 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:47:31.439599 | orchestrator | 2026-02-23 20:47:31 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:47:31.441630 | orchestrator | 2026-02-23 20:47:31 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:47:31.441668 | orchestrator | 2026-02-23 20:47:31 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:47:34.473036 | orchestrator | 2026-02-23 20:47:34 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:47:34.474899 | orchestrator | 2026-02-23 20:47:34 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:47:34.476329 | orchestrator | 2026-02-23 20:47:34 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:47:34.479415 | orchestrator | 2026-02-23 20:47:34 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state STARTED 2026-02-23 20:47:34.479474 | orchestrator | 2026-02-23 20:47:34 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:47:37.516294 | orchestrator | 2026-02-23 20:47:37 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:47:37.516475 | orchestrator | 2026-02-23 20:47:37 | INFO  | Task 
c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED
2026-02-23 20:47:37.517103 | orchestrator | 2026-02-23 20:47:37 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED
2026-02-23 20:47:37.519229 | orchestrator | 2026-02-23 20:47:37 | INFO  | Task ad5b5cf5-9a59-45a0-897b-affc18d2a24f is in state SUCCESS
2026-02-23 20:47:37.520455 | orchestrator | 
2026-02-23 20:47:37.520492 | orchestrator | 
2026-02-23 20:47:37.520499 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:47:37.520506 | orchestrator | 
2026-02-23 20:47:37.520512 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:47:37.520518 | orchestrator | Monday 23 February 2026 20:46:21 +0000 (0:00:00.345) 0:00:00.345 *******
2026-02-23 20:47:37.520524 | orchestrator | ok: [testbed-manager]
2026-02-23 20:47:37.520530 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:47:37.520536 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:47:37.520541 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:47:37.520547 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:47:37.520552 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:47:37.520558 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:47:37.520564 | orchestrator | 
2026-02-23 20:47:37.520569 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-23 20:47:37.520575 | orchestrator | Monday 23 February 2026 20:46:22 +0000 (0:00:00.758) 0:00:01.104 *******
2026-02-23 20:47:37.520580 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-02-23 20:47:37.520586 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-02-23 20:47:37.520591 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-02-23 20:47:37.520596 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-23 20:47:37.520602 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-23 20:47:37.520607 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-23 20:47:37.520613 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-23 20:47:37.520618 | orchestrator | 
2026-02-23 20:47:37.520624 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-23 20:47:37.520629 | orchestrator | 
2026-02-23 20:47:37.520635 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-23 20:47:37.520640 | orchestrator | Monday 23 February 2026 20:46:23 +0000 (0:00:00.677) 0:00:01.782 *******
2026-02-23 20:47:37.520646 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-23 20:47:37.520652 | orchestrator | 
2026-02-23 20:47:37.520700 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-23 20:47:37.520707 | orchestrator | Monday 23 February 2026 20:46:24 +0000 (0:00:01.549) 0:00:03.331 *******
2026-02-23 20:47:37.520713 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-23 20:47:37.520719 | orchestrator | 
2026-02-23 20:47:37.520725 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-02-23 20:47:37.520756 | orchestrator | Monday 23 February 2026 20:46:28 +0000 (0:00:03.646) 0:00:06.977 *******
2026-02-23 20:47:37.520762 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-02-23 20:47:37.520777 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-02-23 20:47:37.520783 | orchestrator | 
2026-02-23 20:47:37.520789 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-02-23 20:47:37.520794 | orchestrator | Monday 23 February 2026 20:46:34 +0000 (0:00:05.468) 0:00:12.446 *******
2026-02-23 20:47:37.520845 | orchestrator | ok: [testbed-manager] => (item=service)
2026-02-23 20:47:37.520852 | orchestrator | 
2026-02-23 20:47:37.520858 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-02-23 20:47:37.520863 | orchestrator | Monday 23 February 2026 20:46:37 +0000 (0:00:03.356) 0:00:15.803 *******
2026-02-23 20:47:37.520869 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-02-23 20:47:37.520875 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-23 20:47:37.520880 | orchestrator | 
2026-02-23 20:47:37.520886 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-02-23 20:47:37.520892 | orchestrator | Monday 23 February 2026 20:46:40 +0000 (0:00:03.569) 0:00:19.372 *******
2026-02-23 20:47:37.520897 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-02-23 20:47:37.520903 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-02-23 20:47:37.520909 | orchestrator | 
2026-02-23 20:47:37.520915 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-02-23 20:47:37.520921 | orchestrator | Monday 23 February 2026 20:46:47 +0000 (0:00:06.142) 0:00:25.515 *******
2026-02-23 20:47:37.520926 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-23 20:47:37.520932 | orchestrator | 
2026-02-23 20:47:37.520938 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:47:37.520944 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:47:37.520950 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:47:37.520956 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:47:37.520962 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:47:37.520968 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:47:37.520982 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:47:37.520988 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:47:37.520993 | orchestrator | 
2026-02-23 20:47:37.520999 | orchestrator | 
2026-02-23 20:47:37.521005 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:47:37.521010 | orchestrator | Monday 23 February 2026 20:46:53 +0000 (0:00:06.781) 0:00:32.297 *******
2026-02-23 20:47:37.521016 | orchestrator | ===============================================================================
2026-02-23 20:47:37.521022 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.78s
2026-02-23 20:47:37.521027 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.14s
2026-02-23 20:47:37.521033 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.47s
2026-02-23 20:47:37.521038 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.65s
2026-02-23 20:47:37.521044 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.57s
2026-02-23 20:47:37.521051 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.36s
2026-02-23 20:47:37.521057 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.55s
2026-02-23 20:47:37.521202 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.76s
2026-02-23 20:47:37.521210 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2026-02-23 20:47:37.521216 | orchestrator | 
2026-02-23 20:47:37.521227 | orchestrator | 
2026-02-23 20:47:37.521233 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:47:37.521239 | orchestrator | 
2026-02-23 20:47:37.521246 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:47:37.521252 | orchestrator | Monday 23 February 2026 20:45:40 +0000 (0:00:00.636) 0:00:00.636 *******
2026-02-23 20:47:37.521258 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:47:37.521264 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:47:37.521271 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:47:37.521277 | orchestrator | 
2026-02-23 20:47:37.521283 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-23 20:47:37.521290 | orchestrator | Monday 23 February 2026 20:45:41 +0000 (0:00:00.657) 0:00:01.294 *******
2026-02-23 20:47:37.521296 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-02-23 20:47:37.521302 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-02-23 20:47:37.521309 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-02-23 20:47:37.521315 | orchestrator | 
2026-02-23 20:47:37.521322 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-02-23 20:47:37.521328 | orchestrator | 
2026-02-23 20:47:37.521334 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-23 20:47:37.521340 | orchestrator | Monday 23 February 2026 20:45:42 +0000 (0:00:00.644) 0:00:01.939 *******
2026-02-23 20:47:37.521350 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:47:37.521357 | orchestrator | 
2026-02-23 20:47:37.521363 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-02-23 20:47:37.521369 | orchestrator | Monday 23 February 2026 20:45:42 +0000 (0:00:00.404) 0:00:02.343 *******
2026-02-23 20:47:37.521374 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-02-23 20:47:37.521380 | orchestrator | 
2026-02-23 20:47:37.521385 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-02-23 20:47:37.521391 | orchestrator | Monday 23 February 2026 20:45:46 +0000 (0:00:03.659) 0:00:06.003 *******
2026-02-23 20:47:37.521397 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-02-23 20:47:37.521402 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-02-23 20:47:37.521408 | orchestrator | 
2026-02-23 20:47:37.521413 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-02-23 20:47:37.521419 | orchestrator | Monday 23 February 2026 20:45:52 +0000 (0:00:06.551) 0:00:12.555 *******
2026-02-23 20:47:37.521424 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-23 20:47:37.521430 | orchestrator | 
2026-02-23 20:47:37.521436 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-02-23 20:47:37.521442 | orchestrator | Monday 23 February 2026 20:45:56 +0000 (0:00:03.500) 0:00:16.055 *******
2026-02-23 20:47:37.521447 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-02-23 20:47:37.521453 | orchestrator | [WARNING]: Module did not set no_log for update_password
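The service-ks-register tasks above register each API twice in the Keystone catalog: once under the internal FQDN (api-int.testbed.osism.xyz) and once under the public FQDN (api.testbed.osism.xyz), with the service port and path appended. A minimal sketch of that URL pattern; the helper is hypothetical and not kolla-ansible code:

```python
# Illustrative sketch of the endpoint URL pattern visible in the log above.
# The FQDNs and ports come from this testbed's log; the helper itself is
# hypothetical, not part of kolla-ansible.

def endpoint_urls(port: int, path: str = "") -> dict:
    """Return the internal and public endpoint URLs for one service."""
    return {
        "internal": f"https://api-int.testbed.osism.xyz:{port}{path}",
        "public": f"https://api.testbed.osism.xyz:{port}{path}",
    }

# Matches the magnum endpoints registered above (port 9511, path /v1):
print(endpoint_urls(9511, "/v1"))
```

The same pattern produces the swift endpoints registered earlier (port 6780, path /swift/v1/AUTH_%(project_id)s).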
2026-02-23 20:47:37.521458 | orchestrator | 
2026-02-23 20:47:37.521464 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-02-23 20:47:37.521469 | orchestrator | Monday 23 February 2026 20:46:00 +0000 (0:00:03.895) 0:00:19.951 *******
2026-02-23 20:47:37.521475 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-23 20:47:37.521480 | orchestrator | 
2026-02-23 20:47:37.521486 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-02-23 20:47:37.521491 | orchestrator | Monday 23 February 2026 20:46:03 +0000 (0:00:03.126) 0:00:23.077 *******
2026-02-23 20:47:37.521496 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-02-23 20:47:37.521502 | orchestrator | 
2026-02-23 20:47:37.521508 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-02-23 20:47:37.521517 | orchestrator | Monday 23 February 2026 20:46:07 +0000 (0:00:03.883) 0:00:26.961 *******
2026-02-23 20:47:37.521522 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:47:37.521528 | orchestrator | 
2026-02-23 20:47:37.521534 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-02-23 20:47:37.521544 | orchestrator | Monday 23 February 2026 20:46:10 +0000 (0:00:03.532) 0:00:30.493 *******
2026-02-23 20:47:37.521550 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:47:37.521555 | orchestrator | 
2026-02-23 20:47:37.521561 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-02-23 20:47:37.521566 | orchestrator | Monday 23 February 2026 20:46:13 +0000 (0:00:03.396) 0:00:33.890 *******
2026-02-23 20:47:37.521572 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:47:37.521578 | orchestrator | 
2026-02-23 20:47:37.521583 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-02-23 20:47:37.521588 | orchestrator | Monday 23 February 2026 20:46:16 +0000 (0:00:02.915) 0:00:36.805 *******
2026-02-23 20:47:37.521595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-23 20:47:37.521606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-23 20:47:37.521612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-23 20:47:37.521618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-23 20:47:37.521633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-23 20:47:37.521639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-23 20:47:37.521645 | orchestrator | 
2026-02-23 20:47:37.521651 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-02-23 20:47:37.521656 | orchestrator | Monday 23 February 2026 20:46:18 +0000 (0:00:01.188) 0:00:37.993 *******
2026-02-23 20:47:37.521662 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:47:37.521668 | orchestrator | 
2026-02-23 20:47:37.521674 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-02-23 20:47:37.521679 | orchestrator | Monday 23 February 2026 20:46:18 +0000 (0:00:00.111) 0:00:38.105 *******
2026-02-23 20:47:37.521684 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:47:37.521690 | orchestrator | skipping: 
[testbed-node-1]
2026-02-23 20:47:37.521696 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:47:37.521701 | orchestrator | 
2026-02-23 20:47:37.521707 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-02-23 20:47:37.521712 | orchestrator | Monday 23 February 2026 20:46:18 +0000 (0:00:00.404) 0:00:38.510 *******
2026-02-23 20:47:37.521719 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-23 20:47:37.521724 | orchestrator | 
2026-02-23 20:47:37.521760 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-02-23 20:47:37.521767 | orchestrator | Monday 23 February 2026 20:46:19 +0000 (0:00:00.843) 0:00:39.353 *******
[... changed: [testbed-node-1], [testbed-node-0], [testbed-node-2] with the same magnum-api and magnum-conductor item dicts as in "Ensuring config directories exist" above ...]
2026-02-23 20:47:37.521824 | orchestrator | 
2026-02-23 20:47:37.521830 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-02-23 20:47:37.521836 | orchestrator | Monday 23 February 2026 20:46:22 +0000 (0:00:03.019) 0:00:42.372 *******
2026-02-23 20:47:37.521841 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:47:37.521847 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:47:37.521852 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:47:37.521858 | orchestrator | 
2026-02-23 20:47:37.521863 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-23 20:47:37.521869 | orchestrator | Monday 23 February 2026 20:46:22 +0000 (0:00:00.246) 0:00:42.619 *******
2026-02-23 20:47:37.521875 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:47:37.521880 | orchestrator | 
2026-02-23 20:47:37.521885 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-02-23 20:47:37.521891 | orchestrator | Monday 23 February 2026 20:46:23 +0000 (0:00:00.599) 0:00:43.218 *******
[... changed: [testbed-node-1], [testbed-node-0], [testbed-node-2] with the same item dicts as above; log truncated here ...]
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:47:37.521947 | orchestrator | 2026-02-23 20:47:37.521953 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-23 20:47:37.521958 | orchestrator | Monday 23 February 2026 20:46:25 +0000 (0:00:02.177) 0:00:45.396 ******* 2026-02-23 20:47:37.521964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-23 20:47:37.521970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:47:37.521979 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:37.521990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-23 20:47:37.521996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-23 
20:47:37.522002 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:37.522037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-23 20:47:37.522046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:47:37.522052 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:37.522058 | orchestrator | 2026-02-23 20:47:37.522064 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] 
****** 2026-02-23 20:47:37.522070 | orchestrator | Monday 23 February 2026 20:46:26 +0000 (0:00:00.918) 0:00:46.315 ******* 2026-02-23 20:47:37.522078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-23 20:47:37.522088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:47:37.522094 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:37.522104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-23 20:47:37.522110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:47:37.522116 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:37.522122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-23 20:47:37.522130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:47:37.522135 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:37.522141 | orchestrator | 2026-02-23 20:47:37.522146 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-23 20:47:37.522152 | orchestrator | Monday 23 February 2026 20:46:27 +0000 (0:00:01.521) 0:00:47.836 ******* 2026-02-23 20:47:37.522175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-23 20:47:37.522185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-23 20:47:37.522192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-23 20:47:37.522201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:47:37.522209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:47:37.522215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:47:37.522221 | orchestrator | 2026-02-23 20:47:37.522227 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-23 20:47:37.522233 | orchestrator | Monday 23 February 2026 20:46:30 +0000 (0:00:02.500) 0:00:50.337 ******* 2026-02-23 20:47:37.522243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-23 20:47:37.522249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-23 20:47:37.522262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-23 20:47:37.522268 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:47:37.522275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:47:37.522284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:47:37.522290 | orchestrator | 2026-02-23 20:47:37.522296 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-23 20:47:37.522302 | orchestrator | Monday 23 February 2026 20:46:38 +0000 (0:00:07.745) 0:00:58.083 ******* 2026-02-23 20:47:37.522308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-23 20:47:37.522317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:47:37.522323 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:37.522331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-23 20:47:37.522337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:47:37.522343 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:37.522352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-23 20:47:37.522362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:47:37.522368 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:37.522374 | orchestrator | 2026-02-23 20:47:37.522380 | orchestrator | 
TASK [magnum : Check magnum containers] **************************************** 2026-02-23 20:47:37.522385 | orchestrator | Monday 23 February 2026 20:46:39 +0000 (0:00:01.359) 0:00:59.442 ******* 2026-02-23 20:47:37.522394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-23 20:47:37.522400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-23 20:47:37.522409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-23 20:47:37.522415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:47:37.522424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:47:37.522433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:47:37.522439 | orchestrator | 2026-02-23 20:47:37.522445 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-23 20:47:37.522450 | orchestrator | Monday 23 February 2026 20:46:42 +0000 (0:00:02.647) 0:01:02.090 ******* 2026-02-23 20:47:37.522456 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:37.522462 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:37.522468 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:37.522473 | orchestrator | 2026-02-23 20:47:37.522479 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-23 
20:47:37.522485 | orchestrator | Monday 23 February 2026 20:46:42 +0000 (0:00:00.525) 0:01:02.616 ******* 2026-02-23 20:47:37.522490 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:47:37.522496 | orchestrator | 2026-02-23 20:47:37.522502 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-23 20:47:37.522508 | orchestrator | Monday 23 February 2026 20:46:45 +0000 (0:00:02.651) 0:01:05.267 ******* 2026-02-23 20:47:37.522513 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:47:37.522519 | orchestrator | 2026-02-23 20:47:37.522524 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-23 20:47:37.522530 | orchestrator | Monday 23 February 2026 20:46:47 +0000 (0:00:02.627) 0:01:07.895 ******* 2026-02-23 20:47:37.522536 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:47:37.522542 | orchestrator | 2026-02-23 20:47:37.522547 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-23 20:47:37.522553 | orchestrator | Monday 23 February 2026 20:47:03 +0000 (0:00:15.962) 0:01:23.857 ******* 2026-02-23 20:47:37.522559 | orchestrator | 2026-02-23 20:47:37.522565 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-23 20:47:37.522570 | orchestrator | Monday 23 February 2026 20:47:03 +0000 (0:00:00.060) 0:01:23.917 ******* 2026-02-23 20:47:37.522575 | orchestrator | 2026-02-23 20:47:37.522580 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-23 20:47:37.522591 | orchestrator | Monday 23 February 2026 20:47:04 +0000 (0:00:00.059) 0:01:23.977 ******* 2026-02-23 20:47:37.522596 | orchestrator | 2026-02-23 20:47:37.522602 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-23 20:47:37.522607 | orchestrator | Monday 23 February 2026 20:47:04 +0000 
(0:00:00.062) 0:01:24.040 ******* 2026-02-23 20:47:37.522613 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:47:37.522619 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:47:37.522625 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:47:37.522630 | orchestrator | 2026-02-23 20:47:37.522637 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-23 20:47:37.522647 | orchestrator | Monday 23 February 2026 20:47:21 +0000 (0:00:17.747) 0:01:41.787 ******* 2026-02-23 20:47:37.522652 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:47:37.522658 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:47:37.522663 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:47:37.522669 | orchestrator | 2026-02-23 20:47:37.522675 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:47:37.522681 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-23 20:47:37.522687 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-23 20:47:37.522693 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-23 20:47:37.522698 | orchestrator | 2026-02-23 20:47:37.522704 | orchestrator | 2026-02-23 20:47:37.522710 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:47:37.522716 | orchestrator | Monday 23 February 2026 20:47:36 +0000 (0:00:14.221) 0:01:56.009 ******* 2026-02-23 20:47:37.522721 | orchestrator | =============================================================================== 2026-02-23 20:47:37.522781 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.75s 2026-02-23 20:47:37.522788 | orchestrator | magnum : Running Magnum bootstrap container 
---------------------------- 15.96s 2026-02-23 20:47:37.522793 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.22s 2026-02-23 20:47:37.522798 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.75s 2026-02-23 20:47:37.522803 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.55s 2026-02-23 20:47:37.522808 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.90s 2026-02-23 20:47:37.522814 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.88s 2026-02-23 20:47:37.522820 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.66s 2026-02-23 20:47:37.522826 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.53s 2026-02-23 20:47:37.522831 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.50s 2026-02-23 20:47:37.522836 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.40s 2026-02-23 20:47:37.522841 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.13s 2026-02-23 20:47:37.522846 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.02s 2026-02-23 20:47:37.522851 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 2.92s 2026-02-23 20:47:37.522860 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.65s 2026-02-23 20:47:37.522866 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.65s 2026-02-23 20:47:37.522871 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.63s 2026-02-23 20:47:37.522876 | orchestrator | magnum : Copying over config.json files for services 
-------------------- 2.50s 2026-02-23 20:47:37.522885 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.18s 2026-02-23 20:47:37.522890 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.52s 2026-02-23 20:47:37.522895 | orchestrator | 2026-02-23 20:47:37 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:47:37.522900 | orchestrator | 2026-02-23 20:47:37 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:47:40.550495 | orchestrator | 2026-02-23 20:47:40 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state STARTED 2026-02-23 20:47:40.550556 | orchestrator | 2026-02-23 20:47:40 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:47:40.550564 | orchestrator | 2026-02-23 20:47:40 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:47:40.550570 | orchestrator | 2026-02-23 20:47:40 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:47:40.550575 | orchestrator | 2026-02-23 20:47:40 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:47:43.577628 | orchestrator | 2026-02-23 20:47:43.577680 | orchestrator | 2026-02-23 20:47:43.577694 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:47:43.577702 | orchestrator | 2026-02-23 20:47:43.577705 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:47:43.577763 | orchestrator | Monday 23 February 2026 20:43:27 +0000 (0:00:00.220) 0:00:00.220 ******* 2026-02-23 20:47:43.577778 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:47:43.577784 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:47:43.577790 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:47:43.577795 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:47:43.577800 | orchestrator | ok: 
[testbed-node-4] 2026-02-23 20:47:43.577811 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:47:43.577817 | orchestrator | 2026-02-23 20:47:43.577822 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:47:43.577828 | orchestrator | Monday 23 February 2026 20:43:28 +0000 (0:00:00.529) 0:00:00.749 ******* 2026-02-23 20:47:43.577833 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-23 20:47:43.577839 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-23 20:47:43.577844 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-23 20:47:43.577850 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-23 20:47:43.577854 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-23 20:47:43.577857 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-23 20:47:43.577860 | orchestrator | 2026-02-23 20:47:43.577864 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-23 20:47:43.577867 | orchestrator | 2026-02-23 20:47:43.577871 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-23 20:47:43.577900 | orchestrator | Monday 23 February 2026 20:43:28 +0000 (0:00:00.594) 0:00:01.344 ******* 2026-02-23 20:47:43.577904 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:47:43.577908 | orchestrator | 2026-02-23 20:47:43.578339 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-23 20:47:43.578367 | orchestrator | Monday 23 February 2026 20:43:29 +0000 (0:00:01.005) 0:00:02.350 ******* 2026-02-23 20:47:43.578371 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:47:43.578375 | orchestrator | ok: 
[testbed-node-1] 2026-02-23 20:47:43.578379 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:47:43.578384 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:47:43.578389 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:47:43.578396 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:47:43.578403 | orchestrator | 2026-02-23 20:47:43.578408 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-23 20:47:43.578430 | orchestrator | Monday 23 February 2026 20:43:30 +0000 (0:00:01.115) 0:00:03.465 ******* 2026-02-23 20:47:43.578435 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:47:43.578440 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:47:43.578445 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:47:43.578450 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:47:43.578456 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:47:43.578461 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:47:43.578473 | orchestrator | 2026-02-23 20:47:43.578478 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-23 20:47:43.578483 | orchestrator | Monday 23 February 2026 20:43:31 +0000 (0:00:00.945) 0:00:04.411 ******* 2026-02-23 20:47:43.578488 | orchestrator | ok: [testbed-node-0] => { 2026-02-23 20:47:43.578494 | orchestrator |  "changed": false, 2026-02-23 20:47:43.578499 | orchestrator |  "msg": "All assertions passed" 2026-02-23 20:47:43.578503 | orchestrator | } 2026-02-23 20:47:43.578516 | orchestrator | ok: [testbed-node-1] => { 2026-02-23 20:47:43.578529 | orchestrator |  "changed": false, 2026-02-23 20:47:43.578535 | orchestrator |  "msg": "All assertions passed" 2026-02-23 20:47:43.578546 | orchestrator | } 2026-02-23 20:47:43.578551 | orchestrator | ok: [testbed-node-2] => { 2026-02-23 20:47:43.578558 | orchestrator |  "changed": false, 2026-02-23 20:47:43.578563 | orchestrator |  "msg": "All assertions passed" 2026-02-23 20:47:43.578569 | orchestrator 
| } 2026-02-23 20:47:43.578573 | orchestrator | ok: [testbed-node-3] => { 2026-02-23 20:47:43.578577 | orchestrator |  "changed": false, 2026-02-23 20:47:43.578602 | orchestrator |  "msg": "All assertions passed" 2026-02-23 20:47:43.578606 | orchestrator | } 2026-02-23 20:47:43.578610 | orchestrator | ok: [testbed-node-4] => { 2026-02-23 20:47:43.578613 | orchestrator |  "changed": false, 2026-02-23 20:47:43.578757 | orchestrator |  "msg": "All assertions passed" 2026-02-23 20:47:43.578768 | orchestrator | } 2026-02-23 20:47:43.578773 | orchestrator | ok: [testbed-node-5] => { 2026-02-23 20:47:43.578778 | orchestrator |  "changed": false, 2026-02-23 20:47:43.578783 | orchestrator |  "msg": "All assertions passed" 2026-02-23 20:47:43.578999 | orchestrator | } 2026-02-23 20:47:43.579005 | orchestrator | 2026-02-23 20:47:43.579010 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-23 20:47:43.579016 | orchestrator | Monday 23 February 2026 20:43:32 +0000 (0:00:00.645) 0:00:05.057 ******* 2026-02-23 20:47:43.579021 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.579050 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.579056 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.579062 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.579067 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.579072 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.579077 | orchestrator | 2026-02-23 20:47:43.579082 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-23 20:47:43.579088 | orchestrator | Monday 23 February 2026 20:43:33 +0000 (0:00:00.515) 0:00:05.573 ******* 2026-02-23 20:47:43.579093 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-23 20:47:43.579098 | orchestrator | 2026-02-23 20:47:43.579104 | orchestrator | TASK [service-ks-register : neutron | Creating 
endpoints] ********************** 2026-02-23 20:47:43.579109 | orchestrator | Monday 23 February 2026 20:43:37 +0000 (0:00:04.120) 0:00:09.693 ******* 2026-02-23 20:47:43.579114 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-23 20:47:43.579127 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-23 20:47:43.579133 | orchestrator | 2026-02-23 20:47:43.579168 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-23 20:47:43.579174 | orchestrator | Monday 23 February 2026 20:43:43 +0000 (0:00:06.176) 0:00:15.869 ******* 2026-02-23 20:47:43.579179 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-23 20:47:43.579194 | orchestrator | 2026-02-23 20:47:43.579199 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-23 20:47:43.579204 | orchestrator | Monday 23 February 2026 20:43:47 +0000 (0:00:03.874) 0:00:19.744 ******* 2026-02-23 20:47:43.579209 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-02-23 20:47:43.579215 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-23 20:47:43.579220 | orchestrator | 2026-02-23 20:47:43.579225 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-23 20:47:43.579230 | orchestrator | Monday 23 February 2026 20:43:51 +0000 (0:00:04.141) 0:00:23.885 ******* 2026-02-23 20:47:43.579235 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-23 20:47:43.579241 | orchestrator | 2026-02-23 20:47:43.579246 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-23 20:47:43.579251 | orchestrator | Monday 23 February 2026 20:43:55 +0000 (0:00:03.947) 0:00:27.833 ******* 2026-02-23 20:47:43.579256 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service -> admin) 2026-02-23 20:47:43.579261 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-23 20:47:43.579266 | orchestrator | 2026-02-23 20:47:43.579272 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-23 20:47:43.579277 | orchestrator | Monday 23 February 2026 20:44:03 +0000 (0:00:08.172) 0:00:36.006 ******* 2026-02-23 20:47:43.579282 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.579287 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.579292 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.579297 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.579302 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.579307 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.579312 | orchestrator | 2026-02-23 20:47:43.579317 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-23 20:47:43.579322 | orchestrator | Monday 23 February 2026 20:44:04 +0000 (0:00:00.634) 0:00:36.640 ******* 2026-02-23 20:47:43.579328 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.579333 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.579338 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.579343 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.579348 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.579353 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.579358 | orchestrator | 2026-02-23 20:47:43.579363 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-23 20:47:43.579368 | orchestrator | Monday 23 February 2026 20:44:06 +0000 (0:00:02.571) 0:00:39.212 ******* 2026-02-23 20:47:43.579376 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:47:43.579380 | orchestrator | ok: 
[testbed-node-1] 2026-02-23 20:47:43.579385 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:47:43.579390 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:47:43.579395 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:47:43.579400 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:47:43.579404 | orchestrator | 2026-02-23 20:47:43.579409 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-23 20:47:43.579415 | orchestrator | Monday 23 February 2026 20:44:08 +0000 (0:00:01.808) 0:00:41.021 ******* 2026-02-23 20:47:43.579420 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.579425 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.579430 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.579435 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.579440 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.579445 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.579450 | orchestrator | 2026-02-23 20:47:43.579456 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-23 20:47:43.579461 | orchestrator | Monday 23 February 2026 20:44:11 +0000 (0:00:02.626) 0:00:43.648 ******* 2026-02-23 20:47:43.579473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.579501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.579507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.579513 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.579518 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.579529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.579535 | orchestrator | 2026-02-23 20:47:43.579540 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-23 20:47:43.579545 | orchestrator | Monday 23 February 2026 20:44:14 +0000 (0:00:03.084) 0:00:46.732 ******* 2026-02-23 20:47:43.579550 | orchestrator | [WARNING]: Skipped 2026-02-23 20:47:43.579554 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-23 20:47:43.579557 | orchestrator | due to this access issue: 2026-02-23 20:47:43.579561 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-23 20:47:43.579564 | orchestrator | a directory 2026-02-23 20:47:43.579567 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:47:43.579570 | orchestrator | 2026-02-23 20:47:43.579573 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-23 20:47:43.579587 | orchestrator | Monday 23 February 2026 20:44:15 +0000 (0:00:01.530) 0:00:48.263 ******* 2026-02-23 20:47:43.579591 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:47:43.579595 | orchestrator | 2026-02-23 20:47:43.579598 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-23 20:47:43.579601 | orchestrator | Monday 23 February 2026 20:44:16 +0000 
(0:00:01.280) 0:00:49.544 ******* 2026-02-23 20:47:43.579605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.579608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.579621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.579627 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.579648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.579655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.579661 | orchestrator | 2026-02-23 20:47:43.579665 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-23 20:47:43.579670 | orchestrator | Monday 23 February 2026 20:44:21 +0000 (0:00:04.027) 0:00:53.572 ******* 2026-02-23 20:47:43.579675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.579684 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.579692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.579697 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.579702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.579735 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.579742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.579747 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.579752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.579761 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.579766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.579771 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.579776 | orchestrator | 2026-02-23 20:47:43.579781 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-23 20:47:43.579791 | orchestrator | Monday 23 February 2026 20:44:24 +0000 (0:00:03.498) 0:00:57.070 ******* 2026-02-23 20:47:43.579805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.579811 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.579833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.579838 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.579844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.579847 | orchestrator | skipping: 
[testbed-node-4] 2026-02-23 20:47:43.579850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.579857 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.579860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.579864 | orchestrator | skipping: 
[testbed-node-1] 2026-02-23 20:47:43.579869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.579873 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.579876 | orchestrator | 2026-02-23 20:47:43.579879 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-23 20:47:43.579887 | orchestrator | Monday 23 February 2026 20:44:27 +0000 (0:00:03.327) 0:01:00.398 ******* 2026-02-23 20:47:43.579890 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.579893 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.579896 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.579899 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.579902 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.579905 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.579908 | orchestrator | 2026-02-23 20:47:43.579912 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-23 20:47:43.579925 | orchestrator | Monday 23 February 2026 20:44:31 +0000 (0:00:03.394) 0:01:03.792 ******* 2026-02-23 20:47:43.579929 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.579932 | orchestrator | 2026-02-23 
20:47:43.579935 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-23 20:47:43.579938 | orchestrator | Monday 23 February 2026 20:44:31 +0000 (0:00:00.143) 0:01:03.936 ******* 2026-02-23 20:47:43.579941 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.579944 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.579948 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.579951 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.579954 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.579961 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.579965 | orchestrator | 2026-02-23 20:47:43.579968 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-23 20:47:43.579971 | orchestrator | Monday 23 February 2026 20:44:32 +0000 (0:00:00.633) 0:01:04.570 ******* 2026-02-23 20:47:43.579974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.579977 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.579981 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.579984 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.579989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.579992 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.579998 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43 | INFO  | Task e438a3b2-f6a9-494d-8ea7-77e141da8561 is in state SUCCESS 2026-02-23 20:47:43.580008 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580014 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580021 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580024 | orchestrator | 2026-02-23 20:47:43.580027 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-23 20:47:43.580030 | orchestrator | Monday 23 February 2026 20:44:34 +0000 (0:00:02.930) 0:01:07.500 ******* 2026-02-23 20:47:43.580033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580052 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.580056 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.580059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.580062 | orchestrator | 2026-02-23 20:47:43.580065 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-23 20:47:43.580071 | orchestrator | Monday 23 February 2026 20:44:39 +0000 (0:00:04.406) 0:01:11.906 ******* 2026-02-23 20:47:43.580074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.580083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580097 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.580100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.580105 | orchestrator | 2026-02-23 20:47:43.580109 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-23 20:47:43.580112 | orchestrator | Monday 23 February 2026 20:44:45 +0000 (0:00:05.701) 0:01:17.607 ******* 2026-02-23 20:47:43.580118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580121 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580128 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580134 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580159 | orchestrator | 2026-02-23 20:47:43.580162 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-23 20:47:43.580165 | orchestrator | Monday 23 February 2026 20:44:48 +0000 (0:00:03.251) 0:01:20.858 ******* 2026-02-23 20:47:43.580168 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580171 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580174 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580178 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:47:43.580181 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:47:43.580184 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:47:43.580187 | orchestrator | 2026-02-23 20:47:43.580190 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] 
************************************* 2026-02-23 20:47:43.580193 | orchestrator | Monday 23 February 2026 20:44:51 +0000 (0:00:02.796) 0:01:23.655 ******* 2026-02-23 20:47:43.580196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580200 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580208 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580216 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.580238 | orchestrator | 2026-02-23 20:47:43.580241 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-23 20:47:43.580244 | orchestrator | Monday 23 February 2026 20:44:55 +0000 (0:00:04.260) 0:01:27.915 ******* 2026-02-23 20:47:43.580247 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580253 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580256 | orchestrator | skipping: [testbed-node-5] 
2026-02-23 20:47:43.580259 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580263 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580266 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580271 | orchestrator | 2026-02-23 20:47:43.580276 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-02-23 20:47:43.580284 | orchestrator | Monday 23 February 2026 20:44:57 +0000 (0:00:02.042) 0:01:29.958 ******* 2026-02-23 20:47:43.580292 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580297 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580305 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580311 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580316 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580321 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580326 | orchestrator | 2026-02-23 20:47:43.580331 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-23 20:47:43.580336 | orchestrator | Monday 23 February 2026 20:44:59 +0000 (0:00:02.221) 0:01:32.179 ******* 2026-02-23 20:47:43.580341 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580347 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580352 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580357 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580363 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580368 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580373 | orchestrator | 2026-02-23 20:47:43.580379 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-23 20:47:43.580384 | orchestrator | Monday 23 February 2026 20:45:02 +0000 (0:00:02.489) 0:01:34.669 ******* 2026-02-23 20:47:43.580390 | orchestrator | skipping: [testbed-node-0] 
2026-02-23 20:47:43.580394 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580397 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580400 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580403 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580406 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580410 | orchestrator | 2026-02-23 20:47:43.580413 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-23 20:47:43.580416 | orchestrator | Monday 23 February 2026 20:45:04 +0000 (0:00:02.298) 0:01:36.967 ******* 2026-02-23 20:47:43.580419 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580423 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580426 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580429 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580435 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580438 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580441 | orchestrator | 2026-02-23 20:47:43.580445 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-23 20:47:43.580448 | orchestrator | Monday 23 February 2026 20:45:06 +0000 (0:00:02.007) 0:01:38.974 ******* 2026-02-23 20:47:43.580451 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580454 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580457 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580461 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580464 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580467 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580476 | orchestrator | 2026-02-23 20:47:43.580479 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-23 20:47:43.580483 | orchestrator | Monday 
23 February 2026 20:45:08 +0000 (0:00:01.929) 0:01:40.904 ******* 2026-02-23 20:47:43.580486 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-23 20:47:43.580489 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580492 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-23 20:47:43.580499 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580502 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-23 20:47:43.580506 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580509 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-23 20:47:43.580512 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580515 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-23 20:47:43.580519 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580522 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-23 20:47:43.580525 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580528 | orchestrator | 2026-02-23 20:47:43.580532 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-23 20:47:43.580535 | orchestrator | Monday 23 February 2026 20:45:10 +0000 (0:00:01.877) 0:01:42.782 ******* 2026-02-23 20:47:43.580538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.580542 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.580552 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.580564 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580571 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580582 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580585 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580588 | orchestrator | 2026-02-23 20:47:43.580592 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-23 20:47:43.580595 | orchestrator | Monday 23 February 2026 20:45:12 +0000 (0:00:01.964) 0:01:44.746 ******* 2026-02-23 20:47:43.580600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.580603 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.580617 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.580624 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580631 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580639 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.580648 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580652 | orchestrator | 2026-02-23 20:47:43.580655 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-23 20:47:43.580658 | orchestrator | Monday 23 February 2026 20:45:14 +0000 (0:00:02.362) 0:01:47.109 ******* 2026-02-23 20:47:43.580661 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580666 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580670 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580673 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580676 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580679 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580683 | orchestrator | 2026-02-23 20:47:43.580686 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-23 20:47:43.580689 | orchestrator | Monday 23 February 2026 20:45:16 +0000 (0:00:01.895) 0:01:49.005 ******* 
2026-02-23 20:47:43.580692 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580695 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580699 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580702 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:47:43.580705 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:47:43.580708 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:47:43.580786 | orchestrator | 2026-02-23 20:47:43.580790 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-23 20:47:43.580793 | orchestrator | Monday 23 February 2026 20:45:20 +0000 (0:00:03.689) 0:01:52.695 ******* 2026-02-23 20:47:43.580797 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580800 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580803 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580806 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580809 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580813 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580816 | orchestrator | 2026-02-23 20:47:43.580819 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-23 20:47:43.580822 | orchestrator | Monday 23 February 2026 20:45:22 +0000 (0:00:02.285) 0:01:54.980 ******* 2026-02-23 20:47:43.580825 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580829 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580832 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580835 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580838 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580841 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580845 | orchestrator | 2026-02-23 20:47:43.580848 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] 
********************************** 2026-02-23 20:47:43.580851 | orchestrator | Monday 23 February 2026 20:45:24 +0000 (0:00:02.096) 0:01:57.077 ******* 2026-02-23 20:47:43.580854 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580858 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580861 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580864 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580867 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580870 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580874 | orchestrator | 2026-02-23 20:47:43.580877 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-23 20:47:43.580880 | orchestrator | Monday 23 February 2026 20:45:26 +0000 (0:00:01.980) 0:01:59.057 ******* 2026-02-23 20:47:43.580883 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580887 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580890 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580893 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580896 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580900 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.580903 | orchestrator | 2026-02-23 20:47:43.580906 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-23 20:47:43.580914 | orchestrator | Monday 23 February 2026 20:45:28 +0000 (0:00:02.081) 0:02:01.139 ******* 2026-02-23 20:47:43.580917 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580922 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580929 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580936 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.580941 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580946 | orchestrator | skipping: 
[testbed-node-5] 2026-02-23 20:47:43.580951 | orchestrator | 2026-02-23 20:47:43.580956 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-02-23 20:47:43.580968 | orchestrator | Monday 23 February 2026 20:45:31 +0000 (0:00:03.006) 0:02:04.146 ******* 2026-02-23 20:47:43.580972 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.580977 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.580982 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.580987 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.580992 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.581001 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.581006 | orchestrator | 2026-02-23 20:47:43.581011 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-23 20:47:43.581016 | orchestrator | Monday 23 February 2026 20:45:34 +0000 (0:00:03.008) 0:02:07.154 ******* 2026-02-23 20:47:43.581021 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.581027 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.581032 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.581037 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.581042 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.581047 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.581053 | orchestrator | 2026-02-23 20:47:43.581058 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-23 20:47:43.581064 | orchestrator | Monday 23 February 2026 20:45:36 +0000 (0:00:02.163) 0:02:09.317 ******* 2026-02-23 20:47:43.581069 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-23 20:47:43.581075 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.581081 | orchestrator | skipping: 
[testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-23 20:47:43.581086 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.581091 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-23 20:47:43.581097 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.581102 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-23 20:47:43.581108 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.581119 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-23 20:47:43.581128 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.581132 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-23 20:47:43.581138 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.581143 | orchestrator | 2026-02-23 20:47:43.581148 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-23 20:47:43.581153 | orchestrator | Monday 23 February 2026 20:45:39 +0000 (0:00:03.113) 0:02:12.430 ******* 2026-02-23 20:47:43.581158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.581169 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.581175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.581180 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.581189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.581194 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.581200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.581205 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.581217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-23 20:47:43.581226 | orchestrator | skipping: 
[testbed-node-2] 2026-02-23 20:47:43.581232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-23 20:47:43.581237 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.581243 | orchestrator | 2026-02-23 20:47:43.581248 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-23 20:47:43.581254 | orchestrator | Monday 23 February 2026 20:45:42 +0000 (0:00:02.335) 0:02:14.766 ******* 2026-02-23 20:47:43.581259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.581268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.581278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-23 20:47:43.581284 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.581294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.581300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-23 20:47:43.581304 | orchestrator | 2026-02-23 20:47:43.581307 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-23 20:47:43.581318 | orchestrator | Monday 23 February 2026 20:45:44 +0000 (0:00:02.508) 0:02:17.274 ******* 2026-02-23 20:47:43.581321 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:47:43.581324 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:47:43.581328 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:47:43.581331 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:47:43.581334 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:47:43.581337 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:47:43.581341 | orchestrator | 2026-02-23 20:47:43.581346 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-23 20:47:43.581350 | orchestrator | Monday 23 February 2026 20:45:45 +0000 (0:00:00.594) 0:02:17.869 ******* 2026-02-23 20:47:43.581353 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:47:43.581356 | orchestrator | 2026-02-23 20:47:43.581360 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-23 20:47:43.581363 | orchestrator | Monday 23 February 2026 20:45:47 +0000 (0:00:01.857) 0:02:19.726 ******* 2026-02-23 20:47:43.581366 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:47:43.581370 | orchestrator | 2026-02-23 20:47:43.581373 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-02-23 20:47:43.581376 | orchestrator | Monday 23 February 2026 20:45:49 +0000 (0:00:02.357) 
0:02:22.083 ******* 2026-02-23 20:47:43.581379 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:47:43.581383 | orchestrator | 2026-02-23 20:47:43.581386 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-23 20:47:43.581389 | orchestrator | Monday 23 February 2026 20:46:35 +0000 (0:00:45.817) 0:03:07.901 ******* 2026-02-23 20:47:43.581392 | orchestrator | 2026-02-23 20:47:43.581396 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-23 20:47:43.581402 | orchestrator | Monday 23 February 2026 20:46:35 +0000 (0:00:00.451) 0:03:08.353 ******* 2026-02-23 20:47:43.581405 | orchestrator | 2026-02-23 20:47:43.581408 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-23 20:47:43.581411 | orchestrator | Monday 23 February 2026 20:46:36 +0000 (0:00:00.590) 0:03:08.943 ******* 2026-02-23 20:47:43.581415 | orchestrator | 2026-02-23 20:47:43.581418 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-23 20:47:43.581421 | orchestrator | Monday 23 February 2026 20:46:36 +0000 (0:00:00.058) 0:03:09.001 ******* 2026-02-23 20:47:43.581424 | orchestrator | 2026-02-23 20:47:43.581431 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-23 20:47:43.581434 | orchestrator | Monday 23 February 2026 20:46:36 +0000 (0:00:00.096) 0:03:09.098 ******* 2026-02-23 20:47:43.581437 | orchestrator | 2026-02-23 20:47:43.581440 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-23 20:47:43.581443 | orchestrator | Monday 23 February 2026 20:46:36 +0000 (0:00:00.085) 0:03:09.183 ******* 2026-02-23 20:47:43.581447 | orchestrator | 2026-02-23 20:47:43.581450 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-02-23 
20:47:43.581453 | orchestrator | Monday 23 February 2026 20:46:36 +0000 (0:00:00.072) 0:03:09.256 ******* 2026-02-23 20:47:43.581457 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:47:43.581460 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:47:43.581463 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:47:43.581466 | orchestrator | 2026-02-23 20:47:43.581470 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-02-23 20:47:43.581473 | orchestrator | Monday 23 February 2026 20:46:59 +0000 (0:00:22.978) 0:03:32.235 ******* 2026-02-23 20:47:43.581481 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:47:43.581484 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:47:43.581487 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:47:43.581490 | orchestrator | 2026-02-23 20:47:43.581494 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:47:43.581497 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-23 20:47:43.581502 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2026-02-23 20:47:43.581505 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2026-02-23 20:47:43.581508 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-23 20:47:43.581512 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-23 20:47:43.581515 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-23 20:47:43.581518 | orchestrator | 2026-02-23 20:47:43.581521 | orchestrator | 2026-02-23 20:47:43.581525 | orchestrator | TASKS RECAP ******************************************************************** 
2026-02-23 20:47:43.581528 | orchestrator | Monday 23 February 2026 20:47:40 +0000 (0:00:41.042) 0:04:13.277 ******* 2026-02-23 20:47:43.581531 | orchestrator | =============================================================================== 2026-02-23 20:47:43.581534 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.82s 2026-02-23 20:47:43.581538 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 41.04s 2026-02-23 20:47:43.581541 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.98s 2026-02-23 20:47:43.581544 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.17s 2026-02-23 20:47:43.581550 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.18s 2026-02-23 20:47:43.581553 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.70s 2026-02-23 20:47:43.581557 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.41s 2026-02-23 20:47:43.581560 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.26s 2026-02-23 20:47:43.581563 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.14s 2026-02-23 20:47:43.581569 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.12s 2026-02-23 20:47:43.581572 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.03s 2026-02-23 20:47:43.581575 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.95s 2026-02-23 20:47:43.581578 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.87s 2026-02-23 20:47:43.581582 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.69s 2026-02-23 
20:47:43.581585 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.50s 2026-02-23 20:47:43.581589 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.39s 2026-02-23 20:47:43.581592 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.33s 2026-02-23 20:47:43.581595 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.25s 2026-02-23 20:47:43.581598 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.11s 2026-02-23 20:47:43.581601 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.08s 2026-02-23 20:47:43.581605 | orchestrator | 2026-02-23 20:47:43 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:47:43.581608 | orchestrator | 2026-02-23 20:47:43 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:47:43.581611 | orchestrator | 2026-02-23 20:47:43 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:47:43.581617 | orchestrator | 2026-02-23 20:47:43 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:47:43.581620 | orchestrator | 2026-02-23 20:47:43 | INFO  | Wait 1 second(s) until the next check
c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:49:11.798108 | orchestrator | 2026-02-23 20:49:11 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:49:11.799096 | orchestrator | 2026-02-23 20:49:11 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:11.799859 | orchestrator | 2026-02-23 20:49:11 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:49:11.799901 | orchestrator | 2026-02-23 20:49:11 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:49:14.838685 | orchestrator | 2026-02-23 20:49:14 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:49:14.838827 | orchestrator | 2026-02-23 20:49:14 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state STARTED 2026-02-23 20:49:14.841858 | orchestrator | 2026-02-23 20:49:14 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:14.842316 | orchestrator | 2026-02-23 20:49:14 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:49:14.842337 | orchestrator | 2026-02-23 20:49:14 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:49:17.871092 | orchestrator | 2026-02-23 20:49:17 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:49:17.872910 | orchestrator | 2026-02-23 20:49:17 | INFO  | Task b83e4211-0f7c-423b-b2d0-f817ae897d65 is in state SUCCESS 2026-02-23 20:49:17.874249 | orchestrator | 2026-02-23 20:49:17.874328 | orchestrator | 2026-02-23 20:49:17.874335 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:49:17.874364 | orchestrator | 2026-02-23 20:49:17.874368 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:49:17.874373 | orchestrator | Monday 23 February 2026 20:46:16 +0000 (0:00:00.256) 0:00:00.256 
******* 2026-02-23 20:49:17.874377 | orchestrator | ok: [testbed-manager] 2026-02-23 20:49:17.874383 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:49:17.874387 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:49:17.874391 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:49:17.874453 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:49:17.874458 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:49:17.874463 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:49:17.874466 | orchestrator | 2026-02-23 20:49:17.874470 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:49:17.874475 | orchestrator | Monday 23 February 2026 20:46:17 +0000 (0:00:00.796) 0:00:01.052 ******* 2026-02-23 20:49:17.874501 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-23 20:49:17.874506 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-23 20:49:17.874510 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-23 20:49:17.874514 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-23 20:49:17.874518 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-23 20:49:17.874522 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-23 20:49:17.874526 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-23 20:49:17.874530 | orchestrator | 2026-02-23 20:49:17.874534 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-02-23 20:49:17.874538 | orchestrator | 2026-02-23 20:49:17.874542 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-23 20:49:17.874546 | orchestrator | Monday 23 February 2026 20:46:18 +0000 (0:00:00.619) 0:00:01.672 ******* 2026-02-23 20:49:17.874552 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:49:17.874558 | orchestrator | 2026-02-23 20:49:17.874577 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-23 20:49:17.874581 | orchestrator | Monday 23 February 2026 20:46:19 +0000 (0:00:01.242) 0:00:02.914 ******* 2026-02-23 20:49:17.874589 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-23 20:49:17.874598 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.874603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.874613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.874634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.874639 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 
20:49:17.874798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.874817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.874824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.874832 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-23 20:49:17.874848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.874863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.874871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.874877 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.874887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.874895 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.874900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.874908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.874912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.874923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.874928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.874933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.874941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.874945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.874949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.874957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.874961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.874970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.874974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.874978 | orchestrator | 2026-02-23 20:49:17.874982 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-23 20:49:17.874986 | orchestrator | Monday 23 February 2026 20:46:22 +0000 (0:00:03.127) 0:00:06.042 ******* 2026-02-23 20:49:17.874991 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-23 20:49:17.874995 | orchestrator | 2026-02-23 20:49:17.874999 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-23 20:49:17.875003 | orchestrator | Monday 23 February 2026 20:46:23 +0000 (0:00:01.342) 0:00:07.384 ******* 2026-02-23 20:49:17.875011 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.875016 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-23 20:49:17.875024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.875028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.875036 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.875041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.875045 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-02-23 20:49:17.875052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.875056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.875066 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.875070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.875074 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.875083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.875087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.875091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.875098 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.875111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.875115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.875119 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.875123 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.875130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.875134 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-23 20:49:17.875143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.875152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.875156 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.875160 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.875164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.875170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.875174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.875178 | orchestrator | 2026-02-23 20:49:17.875182 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-23 20:49:17.875187 | orchestrator | Monday 23 February 2026 20:46:29 +0000 (0:00:05.522) 0:00:12.907 ******* 2026-02-23 20:49:17.875194 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-23 20:49:17.875204 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-23 20:49:17.875305 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875314 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-23 20:49:17.875326 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:49:17.875332 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:49:17.875339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-23 20:49:17.875345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:49:17.875361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:49:17.875367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:49:17.875380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-23 20:49:17.875386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:49:17.875397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:49:17.875403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-23 20:49:17.875426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:49:17.875433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:49:17.875440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:49:17.875447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875453 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:17.875460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-23 20:49:17.875465 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:17.875469 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:17.875805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-23 20:49:17.875820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-23 20:49:17.875830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875850 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875854 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:49:17.875858 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:49:17.875861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-23 20:49:17.875865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875884 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:49:17.875888 | orchestrator | 2026-02-23 20:49:17.875892 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-23 20:49:17.875898 | orchestrator | Monday 23 February 2026 20:46:31 +0000 (0:00:01.814) 0:00:14.721 ******* 2026-02-23 20:49:17.875902 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-23 20:49:17.875911 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-02-23 20:49:17.875915 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-23 20:49:17.875920 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-23 20:49:17.875925 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.875929 | orchestrator | skipping: [testbed-manager]
2026-02-23 20:49:17.875937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.875945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.875949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.875956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.875960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.875964 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:49:17.875968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.875972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.875977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.875991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.875995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876001 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:49:17.876010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876046 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:49:17.876056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876089 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:49:17.876094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876100 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:49:17.876105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876133 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:49:17.876139 | orchestrator |
2026-02-23 20:49:17.876145 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-02-23 20:49:17.876151 | orchestrator | Monday 23 February 2026 20:46:33 +0000 (0:00:02.661) 0:00:17.382 *******
2026-02-23 20:49:17.876156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876166 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-23 20:49:17.876173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876204 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876217 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-23 20:49:17.876225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876529 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876609 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-23 20:49:17.876617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-23 20:49:17.876621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876902 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-23 20:49:17.876918 | orchestrator |
2026-02-23 20:49:17.876988 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-02-23 20:49:17.876997 | orchestrator | Monday 23 February 2026 20:46:40 +0000 (0:00:07.119) 0:00:24.502 *******
2026-02-23 20:49:17.877003 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-23 20:49:17.877008 | orchestrator |
2026-02-23 20:49:17.877089 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-02-23 20:49:17.877117 | orchestrator | Monday 23 February 2026 20:46:42 +0000 (0:00:01.452) 0:00:25.954 *******
2026-02-23 20:49:17.877130 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1106964, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8869045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877137 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1106964, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8869045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877148 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1106964, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8869045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877154 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1107002, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8930287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877261 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1106964, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8869045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877266 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1106964, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8869045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877283 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1106964, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8869045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877288 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1107002, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8930287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877293 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1107002, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8930287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877303 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1106962, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877311 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1106962, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877315 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1106964, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8869045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877319 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1107002, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8930287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.877334 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
12944, 'inode': 1107002, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8930287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877338 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1107002, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8930287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877342 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1106993, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877350 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1106962, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88599, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877358 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1106962, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877362 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1106962, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877366 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1106960, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8854654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-02-23 20:49:17.877380 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1106962, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877385 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1106993, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877389 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1106966, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877397 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1106993, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877406 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1106993, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877410 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1106993, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877414 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1107002, 'dev': 
152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8930287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:49:17.877418 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1106960, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8854654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877432 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1106981, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877436 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1106993, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877443 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1106960, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8854654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877452 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1106960, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8854654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877456 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1106960, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8854654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 
20:49:17.877459 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1106960, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8854654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877463 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1106966, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877656 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1106968, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8877783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877745 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1106966, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877787 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1106966, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877792 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1106966, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877797 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1106966, 'dev': 152, 'nlink': 1, 
'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877801 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1106981, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877806 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1106981, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877903 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1106981, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877910 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1106968, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8877783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877987 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1106981, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877994 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1106963, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.886614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.877998 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1106968, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8877783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878003 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1106968, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8877783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878007 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1106968, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8877783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878056 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1106963, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.886614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878062 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1106981, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878073 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1106963, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.886614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878078 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1106962, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 
1771804946.0, 'ctime': 1771876884.88599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-23 20:49:17.878082 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106998, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8922527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878086 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1106968, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8877783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878090 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1106963, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.886614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878110 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1106963, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.886614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878115 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106998, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8922527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878129 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1106963, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.886614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-23 20:49:17.878133 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106998, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8922527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878137 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106998, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8922527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878141 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106957, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8848937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878145 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106957, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8848937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878162 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106998, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8922527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878170 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106957, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8848937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878177 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106998, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8922527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878181 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1107021, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8949034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878185 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1106993, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878189 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106957, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8848937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878193 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106957, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8848937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878211 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1107021, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8949034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878220 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1106996, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8919516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878225 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1107021, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8949034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878232 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1107021, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8949034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878236 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106957, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8848937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878240 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1107021, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8949034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878244 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1106996, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8919516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878253 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1107021, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8949034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878262 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1106996, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8919516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878266 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106961, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8856504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878291 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106961, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8856504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878295 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1106996, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8919516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878299 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1106996, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8919516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878303 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1106996, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8919516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878318 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1106959, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8851943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878322 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1106959, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8851943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878326 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106961, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8856504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878332 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1106960, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8854654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878336 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106961, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8856504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878340 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106961, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8856504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878344 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1106959, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8851943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878355 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1106959, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8851943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878360 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1106959, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8851943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878363 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106961, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8856504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878370 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1106970, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8890388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878374 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1106970, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8890388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878378 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1106970, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8890388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878382 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1106970, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8890388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878392 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1106970, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8890388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878397 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1106959, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8851943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878400 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1106969, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8880677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878407 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1106969, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8880677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878411 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1106969, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8880677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878415 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1106969, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8880677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878418 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1106966, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.88753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878428 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107020, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8946362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878432 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1106970, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8890388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878436 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:49:17.878441 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107020, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8946362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878448 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1106969, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8880677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878452 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:49:17.878456 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107020, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8946362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878460 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:49:17.878464 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1106969, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8880677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878473 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107020, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8946362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878495 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:49:17.878507 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107020, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8946362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878513 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:49:17.878519 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107020, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8946362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878525 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:49:17.878533 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1106981, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.891351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878539 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1106968, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8877783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878546 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1106963, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.886614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878556 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106998, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8922527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878563 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106957, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8848937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878575 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1107021, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8949034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878581 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1106996, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8919516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878588 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1106961, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8856504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878598 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1106959, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8851943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878602 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1106970, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8890388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878610 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1106969, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8880677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878614 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107020, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8946362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-23 20:49:17.878618 | orchestrator |
2026-02-23 20:49:17.878623 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-23 20:49:17.878629 | orchestrator | Monday 23 February 2026 20:47:07 +0000 (0:00:25.596) 0:00:51.551 *******
2026-02-23 20:49:17.878633 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-23 20:49:17.878637 | orchestrator |
2026-02-23 20:49:17.878645 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-23 20:49:17.878648 | orchestrator | Monday 23 February 2026 20:47:09 +0000 (0:00:01.301) 0:00:52.852 ******* 2026-02-23 20:49:17.878652 | orchestrator | [WARNING]: Skipped 2026-02-23 20:49:17.878658 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878662 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-23 20:49:17.878666 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878670 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-23 20:49:17.878674 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-23 20:49:17.878678 | orchestrator | [WARNING]: Skipped 2026-02-23 20:49:17.878682 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878685 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-23 20:49:17.878689 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878693 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-02-23 20:49:17.878697 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:49:17.878701 | orchestrator | [WARNING]: Skipped 2026-02-23 20:49:17.878704 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878708 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-23 20:49:17.878712 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878715 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-23 20:49:17.878719 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-23 20:49:17.878723 | orchestrator | [WARNING]: 
Skipped 2026-02-23 20:49:17.878727 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878730 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-23 20:49:17.878734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878740 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-02-23 20:49:17.878750 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-23 20:49:17.878754 | orchestrator | [WARNING]: Skipped 2026-02-23 20:49:17.878757 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878761 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-23 20:49:17.878765 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878768 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-23 20:49:17.878772 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-23 20:49:17.878776 | orchestrator | [WARNING]: Skipped 2026-02-23 20:49:17.878780 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878783 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-23 20:49:17.878787 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878791 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-23 20:49:17.878794 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-23 20:49:17.878798 | orchestrator | [WARNING]: Skipped 2026-02-23 20:49:17.878802 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878805 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-23 20:49:17.878809 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-23 20:49:17.878813 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-02-23 20:49:17.878816 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-23 20:49:17.878820 | orchestrator | 2026-02-23 20:49:17.878824 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-02-23 20:49:17.878828 | orchestrator | Monday 23 February 2026 20:47:10 +0000 (0:00:01.728) 0:00:54.580 ******* 2026-02-23 20:49:17.878831 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-23 20:49:17.878836 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:17.878840 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-23 20:49:17.878844 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:17.878847 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-23 20:49:17.878851 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:17.878855 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-23 20:49:17.878859 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:49:17.878862 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-23 20:49:17.878866 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:49:17.878870 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-23 20:49:17.878874 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:49:17.878877 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-23 20:49:17.878881 | orchestrator | 2026-02-23 20:49:17.878885 | orchestrator | TASK 
[prometheus : Copying over prometheus web config file] ******************** 2026-02-23 20:49:17.878888 | orchestrator | Monday 23 February 2026 20:47:24 +0000 (0:00:13.131) 0:01:07.712 ******* 2026-02-23 20:49:17.878892 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-23 20:49:17.878900 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:17.878903 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-23 20:49:17.878907 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:17.878911 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-23 20:49:17.878919 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:17.878923 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-23 20:49:17.878926 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:49:17.878930 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-23 20:49:17.878934 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:49:17.878938 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-23 20:49:17.878941 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:49:17.878945 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-23 20:49:17.878949 | orchestrator | 2026-02-23 20:49:17.878953 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-23 20:49:17.878956 | orchestrator | Monday 23 February 2026 20:47:27 +0000 (0:00:03.642) 0:01:11.354 ******* 2026-02-23 20:49:17.878960 | orchestrator | skipping: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-23 20:49:17.878965 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-23 20:49:17.878969 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-23 20:49:17.878973 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:17.878976 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:17.878983 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:17.878987 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-23 20:49:17.878990 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-23 20:49:17.878994 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:49:17.878998 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-23 20:49:17.879002 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:49:17.879005 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-23 20:49:17.879009 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:49:17.879013 | orchestrator | 2026-02-23 20:49:17.879017 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-23 20:49:17.879020 | orchestrator | Monday 23 February 2026 20:47:29 +0000 (0:00:02.003) 0:01:13.357 ******* 2026-02-23 20:49:17.879024 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-23 20:49:17.879028 | orchestrator | 2026-02-23 
20:49:17.879032 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-23 20:49:17.879035 | orchestrator | Monday 23 February 2026 20:47:30 +0000 (0:00:00.802) 0:01:14.160 ******* 2026-02-23 20:49:17.879039 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:49:17.879043 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:17.879047 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:17.879050 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:17.879054 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:49:17.879058 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:49:17.879061 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:49:17.879065 | orchestrator | 2026-02-23 20:49:17.879069 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-23 20:49:17.879072 | orchestrator | Monday 23 February 2026 20:47:31 +0000 (0:00:00.585) 0:01:14.745 ******* 2026-02-23 20:49:17.879076 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:49:17.879080 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:49:17.879087 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:49:17.879091 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:49:17.879095 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:17.879098 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:49:17.879102 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:49:17.879106 | orchestrator | 2026-02-23 20:49:17.879109 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-23 20:49:17.879113 | orchestrator | Monday 23 February 2026 20:47:33 +0000 (0:00:02.189) 0:01:16.934 ******* 2026-02-23 20:49:17.879117 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-23 20:49:17.879121 | orchestrator | skipping: 
[testbed-node-0] 2026-02-23 20:49:17.879124 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-23 20:49:17.879128 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:49:17.879132 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-23 20:49:17.879136 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:49:17.879139 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-23 20:49:17.879143 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:17.879150 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-23 20:49:17.879154 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:17.879158 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-23 20:49:17.879162 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:49:17.879165 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-23 20:49:17.879169 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:49:17.879173 | orchestrator | 2026-02-23 20:49:17.879177 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-23 20:49:17.879180 | orchestrator | Monday 23 February 2026 20:47:34 +0000 (0:00:01.436) 0:01:18.370 ******* 2026-02-23 20:49:17.879184 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-23 20:49:17.879188 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-23 20:49:17.879192 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:17.879196 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:17.879199 
| orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-23 20:49:17.879203 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:49:17.879207 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-23 20:49:17.879211 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:17.879214 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-23 20:49:17.879218 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:49:17.879222 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-23 20:49:17.879226 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-23 20:49:17.879232 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:49:17.879236 | orchestrator | 2026-02-23 20:49:17.879240 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-23 20:49:17.879243 | orchestrator | Monday 23 February 2026 20:47:36 +0000 (0:00:01.494) 0:01:19.865 ******* 2026-02-23 20:49:17.879247 | orchestrator | [WARNING]: Skipped 2026-02-23 20:49:17.879251 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-23 20:49:17.879258 | orchestrator | due to this access issue: 2026-02-23 20:49:17.879262 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-23 20:49:17.879266 | orchestrator | not a directory 2026-02-23 20:49:17.879269 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-23 20:49:17.879273 | orchestrator | 2026-02-23 20:49:17.879277 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 
2026-02-23 20:49:17.879281 | orchestrator | Monday 23 February 2026 20:47:37 +0000 (0:00:01.338) 0:01:21.203 ******* 2026-02-23 20:49:17.879284 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:49:17.879288 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:17.879292 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:17.879295 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:17.879299 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:49:17.879303 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:49:17.879306 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:49:17.879310 | orchestrator | 2026-02-23 20:49:17.879314 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-23 20:49:17.879318 | orchestrator | Monday 23 February 2026 20:47:38 +0000 (0:00:01.134) 0:01:22.338 ******* 2026-02-23 20:49:17.879321 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:49:17.879325 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:17.879329 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:17.879332 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:17.879336 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:49:17.879340 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:49:17.879344 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:49:17.879347 | orchestrator | 2026-02-23 20:49:17.879351 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-23 20:49:17.879355 | orchestrator | Monday 23 February 2026 20:47:39 +0000 (0:00:00.748) 0:01:23.086 ******* 2026-02-23 20:49:17.879359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.879371 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-23 20:49:17.879379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.879386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.879398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.879405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.879411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.879418 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.879423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.879434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-23 20:49:17.879441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.879447 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.879462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.879469 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.879497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.879504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.879510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.879521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.879529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.879540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.879549 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-23 
20:49:17.879556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.879560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.879564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.879571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.879575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-23 20:49:17.879584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.879591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.879595 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-23 20:49:17.879599 | orchestrator | 2026-02-23 20:49:17.879603 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-23 20:49:17.879607 | orchestrator | Monday 23 February 2026 20:47:43 +0000 (0:00:04.357) 0:01:27.443 ******* 2026-02-23 20:49:17.879611 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-23 20:49:17.879615 | orchestrator | skipping: [testbed-manager] 2026-02-23 20:49:17.879619 | orchestrator | 2026-02-23 20:49:17.879622 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-23 20:49:17.879626 | orchestrator | Monday 23 February 2026 20:47:44 +0000 (0:00:01.118) 0:01:28.562 ******* 2026-02-23 20:49:17.879630 | orchestrator | 2026-02-23 20:49:17.879634 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-23 20:49:17.879638 | orchestrator | Monday 23 February 2026 20:47:44 +0000 (0:00:00.062) 0:01:28.625 ******* 2026-02-23 20:49:17.879641 | orchestrator | 2026-02-23 20:49:17.879645 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-23 20:49:17.879649 | orchestrator | Monday 23 February 2026 20:47:45 +0000 (0:00:00.087) 0:01:28.712 ******* 2026-02-23 20:49:17.879653 | orchestrator | 2026-02-23 20:49:17.879657 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-23 20:49:17.879660 | orchestrator | Monday 23 
February 2026 20:47:45 +0000 (0:00:00.063) 0:01:28.776 ******* 2026-02-23 20:49:17.879664 | orchestrator | 2026-02-23 20:49:17.879668 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-23 20:49:17.879672 | orchestrator | Monday 23 February 2026 20:47:45 +0000 (0:00:00.226) 0:01:29.003 ******* 2026-02-23 20:49:17.879676 | orchestrator | 2026-02-23 20:49:17.879679 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-23 20:49:17.879683 | orchestrator | Monday 23 February 2026 20:47:45 +0000 (0:00:00.062) 0:01:29.066 ******* 2026-02-23 20:49:17.879687 | orchestrator | 2026-02-23 20:49:17.879691 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-23 20:49:17.879699 | orchestrator | Monday 23 February 2026 20:47:45 +0000 (0:00:00.073) 0:01:29.140 ******* 2026-02-23 20:49:17.879702 | orchestrator | 2026-02-23 20:49:17.879706 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-23 20:49:17.879710 | orchestrator | Monday 23 February 2026 20:47:45 +0000 (0:00:00.170) 0:01:29.310 ******* 2026-02-23 20:49:17.879714 | orchestrator | changed: [testbed-manager] 2026-02-23 20:49:17.879717 | orchestrator | 2026-02-23 20:49:17.879721 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-23 20:49:17.879727 | orchestrator | Monday 23 February 2026 20:47:59 +0000 (0:00:13.863) 0:01:43.174 ******* 2026-02-23 20:49:17.879731 | orchestrator | changed: [testbed-manager] 2026-02-23 20:49:17.879735 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:49:17.879739 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:49:17.879743 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:49:17.879746 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:49:17.879750 | orchestrator | changed: [testbed-node-0] 2026-02-23 
20:49:17.879754 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:49:17.879758 | orchestrator | 2026-02-23 20:49:17.879762 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-23 20:49:17.879766 | orchestrator | Monday 23 February 2026 20:48:14 +0000 (0:00:14.813) 0:01:57.987 ******* 2026-02-23 20:49:17.879769 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:49:17.879773 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:49:17.879777 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:17.879781 | orchestrator | 2026-02-23 20:49:17.879785 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-23 20:49:17.879789 | orchestrator | Monday 23 February 2026 20:48:25 +0000 (0:00:11.470) 0:02:09.458 ******* 2026-02-23 20:49:17.879793 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:49:17.879796 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:17.879800 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:49:17.879804 | orchestrator | 2026-02-23 20:49:17.879808 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-23 20:49:17.879812 | orchestrator | Monday 23 February 2026 20:48:30 +0000 (0:00:04.912) 0:02:14.370 ******* 2026-02-23 20:49:17.879815 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:49:17.879819 | orchestrator | changed: [testbed-manager] 2026-02-23 20:49:17.879823 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:49:17.879827 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:49:17.879831 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:49:17.879834 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:49:17.879838 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:17.879842 | orchestrator | 2026-02-23 20:49:17.879846 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 
2026-02-23 20:49:17.879849 | orchestrator | Monday 23 February 2026 20:48:43 +0000 (0:00:12.561) 0:02:26.932 ******* 2026-02-23 20:49:17.879853 | orchestrator | changed: [testbed-manager] 2026-02-23 20:49:17.879857 | orchestrator | 2026-02-23 20:49:17.879863 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-23 20:49:17.879867 | orchestrator | Monday 23 February 2026 20:48:55 +0000 (0:00:12.033) 0:02:38.965 ******* 2026-02-23 20:49:17.879871 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:49:17.879875 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:49:17.879879 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:17.879882 | orchestrator | 2026-02-23 20:49:17.879886 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-23 20:49:17.879890 | orchestrator | Monday 23 February 2026 20:48:59 +0000 (0:00:04.411) 0:02:43.377 ******* 2026-02-23 20:49:17.879894 | orchestrator | changed: [testbed-manager] 2026-02-23 20:49:17.879897 | orchestrator | 2026-02-23 20:49:17.879901 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-23 20:49:17.879905 | orchestrator | Monday 23 February 2026 20:49:09 +0000 (0:00:10.027) 0:02:53.405 ******* 2026-02-23 20:49:17.879912 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:49:17.879916 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:49:17.879920 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:49:17.879924 | orchestrator | 2026-02-23 20:49:17.879928 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:49:17.879932 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-23 20:49:17.879938 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-23 
20:49:17.879942 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-23 20:49:17.879946 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-23 20:49:17.879950 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-23 20:49:17.879953 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-23 20:49:17.879957 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-23 20:49:17.879961 | orchestrator | 2026-02-23 20:49:17.879965 | orchestrator | 2026-02-23 20:49:17.879969 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:49:17.879973 | orchestrator | Monday 23 February 2026 20:49:15 +0000 (0:00:05.708) 0:02:59.113 ******* 2026-02-23 20:49:17.879977 | orchestrator | =============================================================================== 2026-02-23 20:49:17.879980 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.60s 2026-02-23 20:49:17.879984 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.81s 2026-02-23 20:49:17.879988 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.86s 2026-02-23 20:49:17.879992 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.13s 2026-02-23 20:49:17.879995 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 12.56s 2026-02-23 20:49:17.880001 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.03s 2026-02-23 20:49:17.880005 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.47s 2026-02-23 
20:49:17.880009 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.03s 2026-02-23 20:49:17.880013 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.12s 2026-02-23 20:49:17.880016 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.70s 2026-02-23 20:49:17.880020 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.52s 2026-02-23 20:49:17.880024 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 4.91s 2026-02-23 20:49:17.880028 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.41s 2026-02-23 20:49:17.880032 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.36s 2026-02-23 20:49:17.880035 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.64s 2026-02-23 20:49:17.880039 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.13s 2026-02-23 20:49:17.880043 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.66s 2026-02-23 20:49:17.880047 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.19s 2026-02-23 20:49:17.880054 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.00s 2026-02-23 20:49:17.880058 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.81s 2026-02-23 20:49:17.880062 | orchestrator | 2026-02-23 20:49:17 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:49:17.880066 | orchestrator | 2026-02-23 20:49:17 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:17.880072 | orchestrator | 2026-02-23 20:49:17 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc 
is in state STARTED 2026-02-23 20:49:17.880076 | orchestrator | 2026-02-23 20:49:17 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:49:20.904789 | orchestrator | 2026-02-23 20:49:20 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:49:20.904912 | orchestrator | 2026-02-23 20:49:20 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:49:20.905681 | orchestrator | 2026-02-23 20:49:20 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:20.906747 | orchestrator | 2026-02-23 20:49:20 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:49:20.906780 | orchestrator | 2026-02-23 20:49:20 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:49:23.929893 | orchestrator | 2026-02-23 20:49:23 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:49:23.930056 | orchestrator | 2026-02-23 20:49:23 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:49:23.930080 | orchestrator | 2026-02-23 20:49:23 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:23.930593 | orchestrator | 2026-02-23 20:49:23 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:49:23.930614 | orchestrator | 2026-02-23 20:49:23 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:49:26.961608 | orchestrator | 2026-02-23 20:49:26 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:49:26.964694 | orchestrator | 2026-02-23 20:49:26 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:49:26.966750 | orchestrator | 2026-02-23 20:49:26 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:26.969397 | orchestrator | 2026-02-23 20:49:26 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 
20:49:26.969488 | orchestrator | 2026-02-23 20:49:26 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:49:30.009290 | orchestrator | 2026-02-23 20:49:30 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:49:30.011729 | orchestrator | 2026-02-23 20:49:30 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:49:30.012849 | orchestrator | 2026-02-23 20:49:30 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:30.014053 | orchestrator | 2026-02-23 20:49:30 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:49:30.014094 | orchestrator | 2026-02-23 20:49:30 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:49:33.058293 | orchestrator | 2026-02-23 20:49:33 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:49:33.059814 | orchestrator | 2026-02-23 20:49:33 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:49:33.061790 | orchestrator | 2026-02-23 20:49:33 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:33.067108 | orchestrator | 2026-02-23 20:49:33 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:49:33.067171 | orchestrator | 2026-02-23 20:49:33 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:49:36.103290 | orchestrator | 2026-02-23 20:49:36 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:49:36.105412 | orchestrator | 2026-02-23 20:49:36 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:49:36.107247 | orchestrator | 2026-02-23 20:49:36 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:36.110215 | orchestrator | 2026-02-23 20:49:36 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:49:36.110413 | orchestrator 
| 2026-02-23 20:49:36 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:49:39.158194 | orchestrator | 2026-02-23 20:49:39 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state STARTED 2026-02-23 20:49:39.160629 | orchestrator | 2026-02-23 20:49:39 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:49:39.162670 | orchestrator | 2026-02-23 20:49:39 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:39.164298 | orchestrator | 2026-02-23 20:49:39 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:49:39.164397 | orchestrator | 2026-02-23 20:49:39 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:49:42.206698 | orchestrator | 2026-02-23 20:49:42.206745 | orchestrator | 2026-02-23 20:49:42.206751 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:49:42.206755 | orchestrator | 2026-02-23 20:49:42.206759 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:49:42.206763 | orchestrator | Monday 23 February 2026 20:46:57 +0000 (0:00:00.200) 0:00:00.200 ******* 2026-02-23 20:49:42.206767 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:49:42.206771 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:49:42.206775 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:49:42.206778 | orchestrator | 2026-02-23 20:49:42.206782 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:49:42.206786 | orchestrator | Monday 23 February 2026 20:46:58 +0000 (0:00:00.261) 0:00:00.462 ******* 2026-02-23 20:49:42.206789 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-23 20:49:42.206842 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-23 20:49:42.206847 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-23 
20:49:42.206851 | orchestrator | 2026-02-23 20:49:42.206854 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-23 20:49:42.206858 | orchestrator | 2026-02-23 20:49:42.206861 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-23 20:49:42.206865 | orchestrator | Monday 23 February 2026 20:46:58 +0000 (0:00:00.345) 0:00:00.807 ******* 2026-02-23 20:49:42.206868 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:49:42.206872 | orchestrator | 2026-02-23 20:49:42.206876 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-23 20:49:42.206879 | orchestrator | Monday 23 February 2026 20:46:58 +0000 (0:00:00.493) 0:00:01.301 ******* 2026-02-23 20:49:42.206882 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-23 20:49:42.206886 | orchestrator | 2026-02-23 20:49:42.206889 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-23 20:49:42.206893 | orchestrator | Monday 23 February 2026 20:47:02 +0000 (0:00:03.506) 0:00:04.808 ******* 2026-02-23 20:49:42.206896 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-23 20:49:42.206911 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-23 20:49:42.206914 | orchestrator | 2026-02-23 20:49:42.206918 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-23 20:49:42.206921 | orchestrator | Monday 23 February 2026 20:47:08 +0000 (0:00:06.343) 0:00:11.152 ******* 2026-02-23 20:49:42.206925 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-23 20:49:42.206929 | orchestrator | 2026-02-23 20:49:42.206932 | orchestrator | TASK 
[service-ks-register : glance | Creating users] *************************** 2026-02-23 20:49:42.206936 | orchestrator | Monday 23 February 2026 20:47:11 +0000 (0:00:03.167) 0:00:14.319 ******* 2026-02-23 20:49:42.206939 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-23 20:49:42.206943 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-23 20:49:42.206946 | orchestrator | 2026-02-23 20:49:42.206950 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-23 20:49:42.206953 | orchestrator | Monday 23 February 2026 20:47:15 +0000 (0:00:03.554) 0:00:17.873 ******* 2026-02-23 20:49:42.206957 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-23 20:49:42.206960 | orchestrator | 2026-02-23 20:49:42.206963 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-23 20:49:42.206967 | orchestrator | Monday 23 February 2026 20:47:18 +0000 (0:00:02.825) 0:00:20.699 ******* 2026-02-23 20:49:42.206970 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-23 20:49:42.206974 | orchestrator | 2026-02-23 20:49:42.206977 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-23 20:49:42.206981 | orchestrator | Monday 23 February 2026 20:47:21 +0000 (0:00:03.284) 0:00:23.984 ******* 2026-02-23 20:49:42.206999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207016 | orchestrator | 2026-02-23 20:49:42.207020 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-23 20:49:42.207023 | orchestrator | Monday 23 February 2026 20:47:27 +0000 (0:00:05.436) 0:00:29.420 ******* 2026-02-23 20:49:42.207027 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:49:42.207030 | orchestrator | 2026-02-23 20:49:42.207036 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-23 20:49:42.207043 | orchestrator | Monday 23 February 2026 20:47:27 +0000 (0:00:00.489) 0:00:29.909 ******* 2026-02-23 20:49:42.207046 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:49:42.207050 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:42.207053 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:49:42.207056 | orchestrator | 2026-02-23 20:49:42.207060 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-23 20:49:42.207063 | orchestrator | Monday 23 February 2026 20:47:31 +0000 (0:00:03.845) 0:00:33.755 ******* 2026-02-23 20:49:42.207067 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-23 
20:49:42.207074 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-23 20:49:42.207077 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-23 20:49:42.207081 | orchestrator | 2026-02-23 20:49:42.207084 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-23 20:49:42.207088 | orchestrator | Monday 23 February 2026 20:47:33 +0000 (0:00:01.738) 0:00:35.493 ******* 2026-02-23 20:49:42.207091 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-23 20:49:42.207095 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-23 20:49:42.207098 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-23 20:49:42.207102 | orchestrator | 2026-02-23 20:49:42.207105 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-23 20:49:42.207109 | orchestrator | Monday 23 February 2026 20:47:34 +0000 (0:00:00.996) 0:00:36.490 ******* 2026-02-23 20:49:42.207112 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:49:42.207116 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:49:42.207119 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:49:42.207123 | orchestrator | 2026-02-23 20:49:42.207126 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-23 20:49:42.207130 | orchestrator | Monday 23 February 2026 20:47:34 +0000 (0:00:00.725) 0:00:37.216 ******* 2026-02-23 20:49:42.207133 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207137 | orchestrator | 2026-02-23 20:49:42.207140 | orchestrator | TASK [glance : Set glance policy file] 
***************************************** 2026-02-23 20:49:42.207144 | orchestrator | Monday 23 February 2026 20:47:34 +0000 (0:00:00.118) 0:00:37.334 ******* 2026-02-23 20:49:42.207147 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207151 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:42.207154 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42.207188 | orchestrator | 2026-02-23 20:49:42.207191 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-23 20:49:42.207195 | orchestrator | Monday 23 February 2026 20:47:35 +0000 (0:00:00.627) 0:00:37.962 ******* 2026-02-23 20:49:42.207198 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:49:42.207202 | orchestrator | 2026-02-23 20:49:42.207205 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-23 20:49:42.207209 | orchestrator | Monday 23 February 2026 20:47:36 +0000 (0:00:00.549) 0:00:38.511 ******* 2026-02-23 20:49:42.207213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207233 | orchestrator | 2026-02-23 20:49:42.207237 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-23 20:49:42.207241 | orchestrator | Monday 23 February 2026 20:47:40 +0000 (0:00:04.544) 0:00:43.056 ******* 2026-02-23 20:49:42.207253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-23 20:49:42.207257 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:42.207261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-23 20:49:42.207265 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-23 20:49:42.207280 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42.207284 | orchestrator | 2026-02-23 
20:49:42.207287 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-23 20:49:42.207291 | orchestrator | Monday 23 February 2026 20:47:44 +0000 (0:00:04.192) 0:00:47.248 ******* 2026-02-23 20:49:42.207295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-23 20:49:42.207299 | orchestrator | 
skipping: [testbed-node-0] 2026-02-23 20:49:42.207302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-23 20:49:42.207308 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:42.207316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-23 20:49:42.207320 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42.207323 | orchestrator | 2026-02-23 20:49:42.207327 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-23 20:49:42.207330 | orchestrator | Monday 23 February 2026 20:47:47 +0000 (0:00:03.074) 0:00:50.322 ******* 2026-02-23 20:49:42.207334 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207337 | 
orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42.207341 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:42.207344 | orchestrator | 2026-02-23 20:49:42.207348 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-23 20:49:42.207351 | orchestrator | Monday 23 February 2026 20:47:51 +0000 (0:00:03.663) 0:00:53.986 ******* 2026-02-23 20:49:42.207355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207380 | orchestrator | 2026-02-23 20:49:42.207383 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-23 20:49:42.207387 | orchestrator | Monday 23 February 2026 20:47:55 +0000 (0:00:04.040) 0:00:58.026 ******* 2026-02-23 20:49:42.207390 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:49:42.207394 | orchestrator | changed: [testbed-node-0] 2026-02-23 
20:49:42.207397 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:49:42.207400 | orchestrator | 2026-02-23 20:49:42.207404 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-23 20:49:42.207407 | orchestrator | Monday 23 February 2026 20:48:02 +0000 (0:00:06.667) 0:01:04.694 ******* 2026-02-23 20:49:42.207411 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:42.207414 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207418 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42.207435 | orchestrator | 2026-02-23 20:49:42.207440 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-23 20:49:42.207445 | orchestrator | Monday 23 February 2026 20:48:08 +0000 (0:00:05.807) 0:01:10.501 ******* 2026-02-23 20:49:42.207450 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42.207455 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207461 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:42.207466 | orchestrator | 2026-02-23 20:49:42.207469 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-23 20:49:42.207473 | orchestrator | Monday 23 February 2026 20:48:11 +0000 (0:00:03.014) 0:01:13.516 ******* 2026-02-23 20:49:42.207478 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:42.207484 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42 | INFO  | Task c11ce0b5-8996-44dc-9d38-bc056cbc672a is in state SUCCESS 2026-02-23 20:49:42.207492 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207495 | orchestrator | 2026-02-23 20:49:42.207499 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-23 20:49:42.207502 | orchestrator | Monday 23 February 2026 20:48:14 +0000 (0:00:03.233) 0:01:16.749 ******* 2026-02-23 
20:49:42.207506 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207509 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:42.207512 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42.207516 | orchestrator | 2026-02-23 20:49:42.207519 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-23 20:49:42.207523 | orchestrator | Monday 23 February 2026 20:48:18 +0000 (0:00:04.520) 0:01:21.270 ******* 2026-02-23 20:49:42.207526 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207530 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:42.207533 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42.207536 | orchestrator | 2026-02-23 20:49:42.207540 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-23 20:49:42.207543 | orchestrator | Monday 23 February 2026 20:48:19 +0000 (0:00:00.287) 0:01:21.557 ******* 2026-02-23 20:49:42.207547 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-23 20:49:42.207550 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42.207554 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-23 20:49:42.207557 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:49:42.207561 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-23 20:49:42.207564 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207570 | orchestrator | 2026-02-23 20:49:42.207574 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-23 20:49:42.207578 | orchestrator | Monday 23 February 2026 20:48:22 +0000 (0:00:03.347) 0:01:24.905 ******* 2026-02-23 20:49:42.207583 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:42.207588 | 
orchestrator | changed: [testbed-node-1] 2026-02-23 20:49:42.207593 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:49:42.207597 | orchestrator | 2026-02-23 20:49:42.207602 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-23 20:49:42.207608 | orchestrator | Monday 23 February 2026 20:48:26 +0000 (0:00:03.845) 0:01:28.750 ******* 2026-02-23 20:49:42.207614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-23 20:49:42.207641 | orchestrator | 2026-02-23 20:49:42.207645 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-23 20:49:42.207648 | orchestrator | Monday 23 February 2026 20:48:30 +0000 (0:00:03.887) 0:01:32.638 ******* 2026-02-23 20:49:42.207651 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:49:42.207655 | orchestrator | skipping: [testbed-node-1] 2026-02-23 
20:49:42.207658 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:49:42.207662 | orchestrator | 2026-02-23 20:49:42.207665 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-23 20:49:42.207669 | orchestrator | Monday 23 February 2026 20:48:30 +0000 (0:00:00.289) 0:01:32.927 ******* 2026-02-23 20:49:42.207672 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:42.207676 | orchestrator | 2026-02-23 20:49:42.207679 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-02-23 20:49:42.207683 | orchestrator | Monday 23 February 2026 20:48:32 +0000 (0:00:02.051) 0:01:34.978 ******* 2026-02-23 20:49:42.207686 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:42.207690 | orchestrator | 2026-02-23 20:49:42.207693 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-23 20:49:42.207696 | orchestrator | Monday 23 February 2026 20:48:35 +0000 (0:00:02.462) 0:01:37.440 ******* 2026-02-23 20:49:42.207700 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:42.207703 | orchestrator | 2026-02-23 20:49:42.207707 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-23 20:49:42.207710 | orchestrator | Monday 23 February 2026 20:48:37 +0000 (0:00:01.941) 0:01:39.382 ******* 2026-02-23 20:49:42.207713 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:42.207717 | orchestrator | 2026-02-23 20:49:42.207720 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-23 20:49:42.207723 | orchestrator | Monday 23 February 2026 20:49:03 +0000 (0:00:26.181) 0:02:05.563 ******* 2026-02-23 20:49:42.207727 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:42.207730 | orchestrator | 2026-02-23 20:49:42.207736 | orchestrator | TASK [glance : Flush handlers] 
************************************************* 2026-02-23 20:49:42.207741 | orchestrator | Monday 23 February 2026 20:49:05 +0000 (0:00:02.437) 0:02:08.001 ******* 2026-02-23 20:49:42.207745 | orchestrator | 2026-02-23 20:49:42.207748 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-23 20:49:42.207754 | orchestrator | Monday 23 February 2026 20:49:05 +0000 (0:00:00.063) 0:02:08.065 ******* 2026-02-23 20:49:42.207757 | orchestrator | 2026-02-23 20:49:42.207761 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-23 20:49:42.207764 | orchestrator | Monday 23 February 2026 20:49:05 +0000 (0:00:00.064) 0:02:08.129 ******* 2026-02-23 20:49:42.207768 | orchestrator | 2026-02-23 20:49:42.207771 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-23 20:49:42.207775 | orchestrator | Monday 23 February 2026 20:49:05 +0000 (0:00:00.068) 0:02:08.197 ******* 2026-02-23 20:49:42.207778 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:49:42.207782 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:49:42.207785 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:49:42.207789 | orchestrator | 2026-02-23 20:49:42.207792 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:49:42.207797 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-23 20:49:42.207801 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-23 20:49:42.207805 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-23 20:49:42.207808 | orchestrator | 2026-02-23 20:49:42.207812 | orchestrator | 2026-02-23 20:49:42.207815 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-23 20:49:42.207819 | orchestrator | Monday 23 February 2026 20:49:40 +0000 (0:00:34.160) 0:02:42.358 ******* 2026-02-23 20:49:42.207822 | orchestrator | =============================================================================== 2026-02-23 20:49:42.207826 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.16s 2026-02-23 20:49:42.207829 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.18s 2026-02-23 20:49:42.207833 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.67s 2026-02-23 20:49:42.207836 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.34s 2026-02-23 20:49:42.207840 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.81s 2026-02-23 20:49:42.207843 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.44s 2026-02-23 20:49:42.207847 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.55s 2026-02-23 20:49:42.207850 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.52s 2026-02-23 20:49:42.207854 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.19s 2026-02-23 20:49:42.207857 | orchestrator | glance : Copying over config.json files for services -------------------- 4.04s 2026-02-23 20:49:42.207861 | orchestrator | glance : Check glance containers ---------------------------------------- 3.89s 2026-02-23 20:49:42.207864 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.85s 2026-02-23 20:49:42.207868 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.85s 2026-02-23 20:49:42.207871 | orchestrator | glance : Creating TLS 
backend PEM File ---------------------------------- 3.66s 2026-02-23 20:49:42.207874 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.55s 2026-02-23 20:49:42.207878 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.51s 2026-02-23 20:49:42.207881 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.35s 2026-02-23 20:49:42.207885 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.28s 2026-02-23 20:49:42.207888 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.23s 2026-02-23 20:49:42.207892 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.17s 2026-02-23 20:49:42.207897 | orchestrator | 2026-02-23 20:49:42 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:49:42.209345 | orchestrator | 2026-02-23 20:49:42 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:49:42.212035 | orchestrator | 2026-02-23 20:49:42 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:49:42.214442 | orchestrator | 2026-02-23 20:49:42 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state STARTED 2026-02-23 20:49:42.214640 | orchestrator | 2026-02-23 20:49:42 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:15.733370 | orchestrator | 2026-02-23 20:50:15 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:15.735493 | orchestrator | 2026-02-23 20:50:15 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:15.738089 | orchestrator | 2026-02-23 20:50:15 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:15.742597 | orchestrator | 2026-02-23 20:50:15 | INFO  | Task 0e796bf6-177f-497a-9428-b97130b133bc is in state SUCCESS 2026-02-23 20:50:15.744234 | orchestrator | 2026-02-23 20:50:15.744280 | orchestrator | 2026-02-23 20:50:15.744287 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:50:15.744293 | orchestrator | 2026-02-23 20:50:15.744298 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:50:15.744304 | orchestrator | Monday 23 February 2026 20:47:42 +0000 (0:00:00.431) 0:00:00.431 ******* 2026-02-23 20:50:15.744309 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:50:15.744319 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:50:15.744323 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:50:15.744326 | orchestrator | 2026-02-23 20:50:15.744330 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:50:15.744333 | orchestrator | Monday 23 February 2026 20:47:43 +0000 (0:00:00.633) 0:00:01.064 ******* 2026-02-23 20:50:15.744336 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-23 20:50:15.744349 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-23 20:50:15.744353 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-23 20:50:15.744356 | orchestrator | 2026-02-23 20:50:15.744359 | orchestrator | PLAY [Apply role cinder]
******************************************************* 2026-02-23 20:50:15.744362 | orchestrator | 2026-02-23 20:50:15.744366 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-23 20:50:15.744393 | orchestrator | Monday 23 February 2026 20:47:43 +0000 (0:00:00.428) 0:00:01.493 ******* 2026-02-23 20:50:15.744423 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:50:15.744428 | orchestrator | 2026-02-23 20:50:15.744431 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-23 20:50:15.744434 | orchestrator | Monday 23 February 2026 20:47:44 +0000 (0:00:00.903) 0:00:02.397 ******* 2026-02-23 20:50:15.744438 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-23 20:50:15.744441 | orchestrator | 2026-02-23 20:50:15.744445 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-23 20:50:15.744448 | orchestrator | Monday 23 February 2026 20:47:47 +0000 (0:00:03.065) 0:00:05.462 ******* 2026-02-23 20:50:15.744451 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-23 20:50:15.744455 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-23 20:50:15.744458 | orchestrator | 2026-02-23 20:50:15.744477 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-23 20:50:15.744499 | orchestrator | Monday 23 February 2026 20:47:52 +0000 (0:00:05.562) 0:00:11.025 ******* 2026-02-23 20:50:15.744503 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-23 20:50:15.744507 | orchestrator | 2026-02-23 20:50:15.744510 | orchestrator | TASK [service-ks-register : cinder | Creating users] 
*************************** 2026-02-23 20:50:15.744513 | orchestrator | Monday 23 February 2026 20:47:56 +0000 (0:00:03.480) 0:00:14.506 ******* 2026-02-23 20:50:15.744516 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-23 20:50:15.744520 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-23 20:50:15.744523 | orchestrator | 2026-02-23 20:50:15.744526 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-23 20:50:15.744537 | orchestrator | Monday 23 February 2026 20:48:00 +0000 (0:00:03.829) 0:00:18.335 ******* 2026-02-23 20:50:15.744540 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-23 20:50:15.744543 | orchestrator | 2026-02-23 20:50:15.744555 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-23 20:50:15.744558 | orchestrator | Monday 23 February 2026 20:48:03 +0000 (0:00:03.235) 0:00:21.571 ******* 2026-02-23 20:50:15.744567 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-23 20:50:15.744591 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-23 20:50:15.744595 | orchestrator | 2026-02-23 20:50:15.744598 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-23 20:50:15.744601 | orchestrator | Monday 23 February 2026 20:48:09 +0000 (0:00:06.353) 0:00:27.925 ******* 2026-02-23 20:50:15.744607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.744620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.744628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.744632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.744658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.744784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.744795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.744834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.744838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.744841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.744873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.744878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.744881 | orchestrator | 2026-02-23 20:50:15.744884 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-23 20:50:15.744888 | orchestrator | Monday 23 February 2026 20:48:11 +0000 (0:00:02.023) 0:00:29.948 ******* 2026-02-23 20:50:15.744891 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:50:15.744895 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:50:15.744898 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:50:15.744901 | orchestrator | 2026-02-23 20:50:15.744904 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-23 20:50:15.744907 | orchestrator | Monday 23 February 2026 20:48:12 +0000 (0:00:00.253) 0:00:30.202 ******* 2026-02-23 20:50:15.744910 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:50:15.744913 | orchestrator | 2026-02-23 20:50:15.744917 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-23 20:50:15.744920 | orchestrator | Monday 23 February 2026 20:48:12 +0000 (0:00:00.589) 0:00:30.791 ******* 2026-02-23 20:50:15.744926 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-23 20:50:15.744929 | orchestrator | changed: [testbed-node-1] => 
(item=cinder-volume) 2026-02-23 20:50:15.744932 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-23 20:50:15.744935 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-23 20:50:15.744939 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-23 20:50:15.744942 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-23 20:50:15.744945 | orchestrator | 2026-02-23 20:50:15.744948 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-23 20:50:15.744951 | orchestrator | Monday 23 February 2026 20:48:14 +0000 (0:00:01.724) 0:00:32.516 ******* 2026-02-23 20:50:15.744955 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-23 20:50:15.744965 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-23 20:50:15.744969 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-23 20:50:15.744973 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 
'ceph', 'enabled': True}])  2026-02-23 20:50:15.744979 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-23 20:50:15.744983 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-23 20:50:15.744988 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-23 20:50:15.744994 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-23 20:50:15.744997 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-23 20:50:15.745002 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-23 20:50:15.745006 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-23 20:50:15.745009 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-23 20:50:15.745014 | orchestrator | 2026-02-23 20:50:15.745018 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-23 20:50:15.745021 | orchestrator | Monday 23 February 2026 20:48:18 +0000 (0:00:04.346) 0:00:36.862 ******* 2026-02-23 20:50:15.745024 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-23 20:50:15.745027 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-23 20:50:15.745030 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-23 20:50:15.745034 | orchestrator | 2026-02-23 20:50:15.745037 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-23 20:50:15.745040 | orchestrator | Monday 23 February 2026 20:48:20 +0000 (0:00:01.703) 0:00:38.566 ******* 2026-02-23 20:50:15.745043 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-23 20:50:15.745048 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-23 20:50:15.745051 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-23 20:50:15.745054 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-23 20:50:15.745058 | orchestrator | changed: [testbed-node-2] => 
(item=ceph.client.cinder-backup.keyring) 2026-02-23 20:50:15.745061 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-23 20:50:15.745064 | orchestrator | 2026-02-23 20:50:15.745067 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-23 20:50:15.745070 | orchestrator | Monday 23 February 2026 20:48:23 +0000 (0:00:02.784) 0:00:41.351 ******* 2026-02-23 20:50:15.745073 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-23 20:50:15.745077 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-23 20:50:15.745080 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-23 20:50:15.745083 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-23 20:50:15.745086 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-23 20:50:15.745090 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-23 20:50:15.745095 | orchestrator | 2026-02-23 20:50:15.745100 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-23 20:50:15.745105 | orchestrator | Monday 23 February 2026 20:48:24 +0000 (0:00:01.040) 0:00:42.391 ******* 2026-02-23 20:50:15.745109 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:50:15.745114 | orchestrator | 2026-02-23 20:50:15.745119 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-23 20:50:15.745123 | orchestrator | Monday 23 February 2026 20:48:24 +0000 (0:00:00.101) 0:00:42.492 ******* 2026-02-23 20:50:15.745232 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:50:15.745238 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:50:15.745241 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:50:15.745245 | orchestrator | 2026-02-23 20:50:15.745250 | orchestrator | TASK [cinder : include_tasks] ************************************************** 
2026-02-23 20:50:15.745255 | orchestrator | Monday 23 February 2026 20:48:24 +0000 (0:00:00.282) 0:00:42.775 ******* 2026-02-23 20:50:15.745260 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:50:15.745273 | orchestrator | 2026-02-23 20:50:15.745278 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-23 20:50:15.745296 | orchestrator | Monday 23 February 2026 20:48:25 +0000 (0:00:00.734) 0:00:43.509 ******* 2026-02-23 20:50:15.745302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745332 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745376 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 2026-02-23 20:50:15 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:15.745401 | orchestrator | 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745404 | orchestrator | 2026-02-23 20:50:15.745407 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-23 20:50:15.745410 | orchestrator | Monday 23 February 2026 20:48:28 +0000 (0:00:03.528) 0:00:47.038 ******* 
2026-02-23 20:50:15.745413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:50:15.745417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:50:15.745425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745445 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:50:15.745450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745453 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:50:15.745456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:50:15.745462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745474 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:50:15.745477 | orchestrator | 2026-02-23 20:50:15.745480 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-23 20:50:15.745483 | orchestrator | Monday 23 February 2026 20:48:29 +0000 (0:00:00.664) 0:00:47.702 ******* 2026-02-23 20:50:15.745488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:50:15.745491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745506 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:50:15.745510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:50:15.745513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745526 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:50:15.745532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:50:15.745535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745545 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:50:15.745548 | orchestrator | 2026-02-23 20:50:15.745551 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-02-23 20:50:15.745555 | orchestrator | Monday 23 February 2026 20:48:30 +0000 (0:00:01.053) 0:00:48.756 ******* 2026-02-23 20:50:15.745559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745574 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745614 | orchestrator | 2026-02-23 20:50:15.745618 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-23 20:50:15.745621 | orchestrator | Monday 23 February 2026 20:48:34 +0000 (0:00:04.241) 0:00:52.997 ******* 2026-02-23 20:50:15.745624 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-23 20:50:15.745627 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-23 20:50:15.745630 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-23 20:50:15.745633 | orchestrator | 2026-02-23 20:50:15.745636 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-23 20:50:15.745640 | orchestrator | Monday 23 February 2026 20:48:36 +0000 (0:00:01.795) 0:00:54.793 ******* 2026-02-23 20:50:15.745645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745659 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745696 | orchestrator | 2026-02-23 20:50:15.745699 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-23 20:50:15.745702 | orchestrator | Monday 23 February 2026 20:48:47 +0000 (0:00:10.557) 0:01:05.351 ******* 2026-02-23 20:50:15.745706 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:50:15.745709 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:50:15.745712 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:50:15.745715 | orchestrator | 2026-02-23 20:50:15.745720 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-23 20:50:15.745723 | orchestrator | Monday 23 February 2026 20:48:48 +0000 (0:00:01.563) 0:01:06.914 ******* 2026-02-23 20:50:15.745726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:50:15.745730 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745743 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:50:15.745747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:50:15.745752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-23 20:50:15.745766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745772 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:50:15.745776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-23 20:50:15.745787 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:50:15.745790 | orchestrator | 2026-02-23 20:50:15.745794 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-23 20:50:15.745797 | orchestrator | Monday 23 February 2026 20:48:49 +0000 (0:00:00.617) 0:01:07.532 ******* 2026-02-23 20:50:15.745800 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:50:15.745803 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:50:15.745806 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:50:15.745809 | orchestrator | 2026-02-23 20:50:15.745812 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-23 20:50:15.745816 | orchestrator | Monday 23 February 2026 20:48:49 +0000 (0:00:00.306) 0:01:07.838 ******* 2026-02-23 20:50:15.745819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-23 20:50:15.745833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-23 20:50:15.745878 | orchestrator | 2026-02-23 20:50:15.745882 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-23 20:50:15.745885 | orchestrator | Monday 23 February 2026 20:48:52 +0000 (0:00:02.830) 0:01:10.669 ******* 2026-02-23 20:50:15.745889 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:50:15.745892 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:50:15.745896 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:50:15.745899 | orchestrator | 2026-02-23 20:50:15.745903 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-23 20:50:15.745906 | orchestrator | Monday 23 February 2026 20:48:53 +0000 (0:00:00.415) 0:01:11.084 ******* 2026-02-23 20:50:15.745909 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:50:15.745913 | orchestrator | 2026-02-23 20:50:15.745916 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-23 20:50:15.745919 | orchestrator | Monday 23 February 2026 20:48:55 +0000 (0:00:02.161) 0:01:13.245 ******* 2026-02-23 20:50:15.745923 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:50:15.745926 | orchestrator | 2026-02-23 20:50:15.745930 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-23 20:50:15.745935 | orchestrator | Monday 23 February 2026 20:48:57 +0000 (0:00:02.164) 0:01:15.410 ******* 2026-02-23 20:50:15.745939 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:50:15.745942 | orchestrator | 2026-02-23 20:50:15.745946 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-23 20:50:15.745949 
| orchestrator | Monday 23 February 2026 20:49:15 +0000 (0:00:18.151) 0:01:33.561 ******* 2026-02-23 20:50:15.745953 | orchestrator | 2026-02-23 20:50:15.745956 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-23 20:50:15.745959 | orchestrator | Monday 23 February 2026 20:49:15 +0000 (0:00:00.132) 0:01:33.694 ******* 2026-02-23 20:50:15.745963 | orchestrator | 2026-02-23 20:50:15.745966 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-23 20:50:15.745970 | orchestrator | Monday 23 February 2026 20:49:15 +0000 (0:00:00.070) 0:01:33.764 ******* 2026-02-23 20:50:15.745973 | orchestrator | 2026-02-23 20:50:15.745976 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-23 20:50:15.745980 | orchestrator | Monday 23 February 2026 20:49:15 +0000 (0:00:00.072) 0:01:33.837 ******* 2026-02-23 20:50:15.745983 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:50:15.745986 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:50:15.745993 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:50:15.745996 | orchestrator | 2026-02-23 20:50:15.745999 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-23 20:50:15.746003 | orchestrator | Monday 23 February 2026 20:49:39 +0000 (0:00:24.011) 0:01:57.849 ******* 2026-02-23 20:50:15.746006 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:50:15.746009 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:50:15.746042 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:50:15.746046 | orchestrator | 2026-02-23 20:50:15.746049 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-23 20:50:15.746053 | orchestrator | Monday 23 February 2026 20:49:49 +0000 (0:00:09.894) 0:02:07.743 ******* 2026-02-23 20:50:15.746057 | orchestrator | changed: 
[testbed-node-0] 2026-02-23 20:50:15.746060 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:50:15.746063 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:50:15.746067 | orchestrator | 2026-02-23 20:50:15.746070 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-23 20:50:15.746074 | orchestrator | Monday 23 February 2026 20:50:09 +0000 (0:00:19.297) 0:02:27.041 ******* 2026-02-23 20:50:15.746077 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:50:15.746080 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:50:15.746084 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:50:15.746087 | orchestrator | 2026-02-23 20:50:15.746095 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-23 20:50:15.746104 | orchestrator | Monday 23 February 2026 20:50:14 +0000 (0:00:05.774) 0:02:32.816 ******* 2026-02-23 20:50:15.746110 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:50:15.746115 | orchestrator | 2026-02-23 20:50:15.746120 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 20:50:15.746125 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-23 20:50:15.746132 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-23 20:50:15.746137 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-23 20:50:15.746143 | orchestrator | 2026-02-23 20:50:15.746149 | orchestrator | 2026-02-23 20:50:15.746159 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 20:50:15.746163 | orchestrator | Monday 23 February 2026 20:50:15 +0000 (0:00:00.268) 0:02:33.084 ******* 2026-02-23 20:50:15.746167 | orchestrator | 
=============================================================================== 2026-02-23 20:50:15.746170 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.01s 2026-02-23 20:50:15.746174 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 19.30s 2026-02-23 20:50:15.746177 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.15s 2026-02-23 20:50:15.746181 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.56s 2026-02-23 20:50:15.746184 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.89s 2026-02-23 20:50:15.746188 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.35s 2026-02-23 20:50:15.746191 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.78s 2026-02-23 20:50:15.746195 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.56s 2026-02-23 20:50:15.746198 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.35s 2026-02-23 20:50:15.746202 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.24s 2026-02-23 20:50:15.746205 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.83s 2026-02-23 20:50:15.746209 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.53s 2026-02-23 20:50:15.746216 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.48s 2026-02-23 20:50:15.746220 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.24s 2026-02-23 20:50:15.746224 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.07s 2026-02-23 20:50:15.746227 | orchestrator | cinder : Check 
cinder containers ---------------------------------------- 2.83s 2026-02-23 20:50:15.746231 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.78s 2026-02-23 20:50:15.746236 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.16s 2026-02-23 20:50:15.746240 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.16s 2026-02-23 20:50:15.746244 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.02s 2026-02-23 20:50:18.795056 | orchestrator | 2026-02-23 20:50:18 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:18.796634 | orchestrator | 2026-02-23 20:50:18 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:18.801672 | orchestrator | 2026-02-23 20:50:18 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:18.802203 | orchestrator | 2026-02-23 20:50:18 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:21.842480 | orchestrator | 2026-02-23 20:50:21 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:21.844061 | orchestrator | 2026-02-23 20:50:21 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:21.845806 | orchestrator | 2026-02-23 20:50:21 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:21.845842 | orchestrator | 2026-02-23 20:50:21 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:24.891638 | orchestrator | 2026-02-23 20:50:24 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:24.894421 | orchestrator | 2026-02-23 20:50:24 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:24.896423 | orchestrator | 2026-02-23 20:50:24 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in 
state STARTED 2026-02-23 20:50:24.896468 | orchestrator | 2026-02-23 20:50:24 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:27.943828 | orchestrator | 2026-02-23 20:50:27 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:27.946403 | orchestrator | 2026-02-23 20:50:27 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:27.948776 | orchestrator | 2026-02-23 20:50:27 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:27.948880 | orchestrator | 2026-02-23 20:50:27 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:30.991821 | orchestrator | 2026-02-23 20:50:30 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:30.993388 | orchestrator | 2026-02-23 20:50:30 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:30.996788 | orchestrator | 2026-02-23 20:50:30 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:30.997132 | orchestrator | 2026-02-23 20:50:30 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:34.040948 | orchestrator | 2026-02-23 20:50:34 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:34.041731 | orchestrator | 2026-02-23 20:50:34 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:34.043134 | orchestrator | 2026-02-23 20:50:34 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:34.043192 | orchestrator | 2026-02-23 20:50:34 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:37.103812 | orchestrator | 2026-02-23 20:50:37 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:37.105588 | orchestrator | 2026-02-23 20:50:37 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:37.108732 | orchestrator 
| 2026-02-23 20:50:37 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:37.108775 | orchestrator | 2026-02-23 20:50:37 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:40.149353 | orchestrator | 2026-02-23 20:50:40 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:40.151228 | orchestrator | 2026-02-23 20:50:40 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:40.153077 | orchestrator | 2026-02-23 20:50:40 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:40.153376 | orchestrator | 2026-02-23 20:50:40 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:43.195997 | orchestrator | 2026-02-23 20:50:43 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:43.199493 | orchestrator | 2026-02-23 20:50:43 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:43.200806 | orchestrator | 2026-02-23 20:50:43 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:43.200836 | orchestrator | 2026-02-23 20:50:43 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:46.244979 | orchestrator | 2026-02-23 20:50:46 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:46.247390 | orchestrator | 2026-02-23 20:50:46 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:46.248866 | orchestrator | 2026-02-23 20:50:46 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:46.248915 | orchestrator | 2026-02-23 20:50:46 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:49.292473 | orchestrator | 2026-02-23 20:50:49 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:49.294167 | orchestrator | 2026-02-23 20:50:49 | INFO  | Task 
8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:49.297693 | orchestrator | 2026-02-23 20:50:49 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:49.297882 | orchestrator | 2026-02-23 20:50:49 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:52.336702 | orchestrator | 2026-02-23 20:50:52 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:52.338321 | orchestrator | 2026-02-23 20:50:52 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:52.340004 | orchestrator | 2026-02-23 20:50:52 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:52.340421 | orchestrator | 2026-02-23 20:50:52 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:55.378119 | orchestrator | 2026-02-23 20:50:55 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:55.379132 | orchestrator | 2026-02-23 20:50:55 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:55.381125 | orchestrator | 2026-02-23 20:50:55 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:55.381240 | orchestrator | 2026-02-23 20:50:55 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:50:58.422302 | orchestrator | 2026-02-23 20:50:58 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:50:58.424046 | orchestrator | 2026-02-23 20:50:58 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:50:58.425787 | orchestrator | 2026-02-23 20:50:58 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:50:58.425837 | orchestrator | 2026-02-23 20:50:58 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:01.467917 | orchestrator | 2026-02-23 20:51:01 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state 
STARTED 2026-02-23 20:51:01.469809 | orchestrator | 2026-02-23 20:51:01 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:01.471335 | orchestrator | 2026-02-23 20:51:01 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:01.471418 | orchestrator | 2026-02-23 20:51:01 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:04.514719 | orchestrator | 2026-02-23 20:51:04 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:04.518280 | orchestrator | 2026-02-23 20:51:04 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:04.520789 | orchestrator | 2026-02-23 20:51:04 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:04.520852 | orchestrator | 2026-02-23 20:51:04 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:07.559722 | orchestrator | 2026-02-23 20:51:07 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:07.561911 | orchestrator | 2026-02-23 20:51:07 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:07.563777 | orchestrator | 2026-02-23 20:51:07 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:07.563826 | orchestrator | 2026-02-23 20:51:07 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:10.601696 | orchestrator | 2026-02-23 20:51:10 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:10.603131 | orchestrator | 2026-02-23 20:51:10 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:10.604398 | orchestrator | 2026-02-23 20:51:10 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:10.604498 | orchestrator | 2026-02-23 20:51:10 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:13.650705 | orchestrator | 
2026-02-23 20:51:13 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:13.652773 | orchestrator | 2026-02-23 20:51:13 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:13.658959 | orchestrator | 2026-02-23 20:51:13 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:13.659051 | orchestrator | 2026-02-23 20:51:13 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:16.695896 | orchestrator | 2026-02-23 20:51:16 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:16.697733 | orchestrator | 2026-02-23 20:51:16 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:16.700295 | orchestrator | 2026-02-23 20:51:16 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:16.700966 | orchestrator | 2026-02-23 20:51:16 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:19.746773 | orchestrator | 2026-02-23 20:51:19 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:19.748512 | orchestrator | 2026-02-23 20:51:19 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:19.749903 | orchestrator | 2026-02-23 20:51:19 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:19.749949 | orchestrator | 2026-02-23 20:51:19 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:22.783136 | orchestrator | 2026-02-23 20:51:22 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:22.783236 | orchestrator | 2026-02-23 20:51:22 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:22.785240 | orchestrator | 2026-02-23 20:51:22 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:22.785307 | orchestrator | 2026-02-23 20:51:22 | INFO  | 
Wait 1 second(s) until the next check 2026-02-23 20:51:25.823246 | orchestrator | 2026-02-23 20:51:25 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:25.824812 | orchestrator | 2026-02-23 20:51:25 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:25.827794 | orchestrator | 2026-02-23 20:51:25 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:25.827840 | orchestrator | 2026-02-23 20:51:25 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:28.873845 | orchestrator | 2026-02-23 20:51:28 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:28.874798 | orchestrator | 2026-02-23 20:51:28 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:28.876289 | orchestrator | 2026-02-23 20:51:28 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:28.876328 | orchestrator | 2026-02-23 20:51:28 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:31.919114 | orchestrator | 2026-02-23 20:51:31 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:31.922326 | orchestrator | 2026-02-23 20:51:31 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:31.924130 | orchestrator | 2026-02-23 20:51:31 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:31.924527 | orchestrator | 2026-02-23 20:51:31 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:34.971190 | orchestrator | 2026-02-23 20:51:34 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:34.972678 | orchestrator | 2026-02-23 20:51:34 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:34.974239 | orchestrator | 2026-02-23 20:51:34 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state 
STARTED 2026-02-23 20:51:34.974281 | orchestrator | 2026-02-23 20:51:34 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:38.012706 | orchestrator | 2026-02-23 20:51:38 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:38.012908 | orchestrator | 2026-02-23 20:51:38 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:38.014250 | orchestrator | 2026-02-23 20:51:38 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:38.014288 | orchestrator | 2026-02-23 20:51:38 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:41.060075 | orchestrator | 2026-02-23 20:51:41 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state STARTED 2026-02-23 20:51:41.061947 | orchestrator | 2026-02-23 20:51:41 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state STARTED 2026-02-23 20:51:41.064146 | orchestrator | 2026-02-23 20:51:41 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:41.064206 | orchestrator | 2026-02-23 20:51:41 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:44.107663 | orchestrator | 2026-02-23 20:51:44 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:51:44.112538 | orchestrator | 2026-02-23 20:51:44 | INFO  | Task a11c02ee-b5ac-4631-8b68-069314a360ff is in state SUCCESS 2026-02-23 20:51:44.114707 | orchestrator | 2026-02-23 20:51:44.114865 | orchestrator | 2026-02-23 20:51:44.114873 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 20:51:44.114878 | orchestrator | 2026-02-23 20:51:44.114883 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 20:51:44.114887 | orchestrator | Monday 23 February 2026 20:49:45 +0000 (0:00:00.261) 0:00:00.261 ******* 2026-02-23 20:51:44.114892 | orchestrator | ok: [testbed-node-0] 
2026-02-23 20:51:44.114897 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:51:44.114901 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:51:44.114905 | orchestrator | 2026-02-23 20:51:44.114910 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 20:51:44.114915 | orchestrator | Monday 23 February 2026 20:49:45 +0000 (0:00:00.295) 0:00:00.557 ******* 2026-02-23 20:51:44.114919 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-23 20:51:44.114923 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-23 20:51:44.114927 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-23 20:51:44.114931 | orchestrator | 2026-02-23 20:51:44.114934 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-23 20:51:44.114938 | orchestrator | 2026-02-23 20:51:44.114942 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-23 20:51:44.114946 | orchestrator | Monday 23 February 2026 20:49:46 +0000 (0:00:00.419) 0:00:00.976 ******* 2026-02-23 20:51:44.114950 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:51:44.114954 | orchestrator | 2026-02-23 20:51:44.114957 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-23 20:51:44.114961 | orchestrator | Monday 23 February 2026 20:49:46 +0000 (0:00:00.511) 0:00:01.487 ******* 2026-02-23 20:51:44.114967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.114972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115018 | orchestrator | 2026-02-23 20:51:44.115022 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-23 20:51:44.115026 | orchestrator | Monday 23 February 2026 20:49:47 +0000 (0:00:00.663) 0:00:02.151 ******* 2026-02-23 
20:51:44.115030 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-23 20:51:44.115034 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-23 20:51:44.115038 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:51:44.115042 | orchestrator | 2026-02-23 20:51:44.115046 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-23 20:51:44.115200 | orchestrator | Monday 23 February 2026 20:49:48 +0000 (0:00:00.847) 0:00:02.999 ******* 2026-02-23 20:51:44.115209 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:51:44.115215 | orchestrator | 2026-02-23 20:51:44.115221 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-23 20:51:44.115227 | orchestrator | Monday 23 February 2026 20:49:48 +0000 (0:00:00.679) 0:00:03.679 ******* 2026-02-23 20:51:44.115243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115262 | orchestrator | 2026-02-23 20:51:44.115268 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-23 20:51:44.115281 | orchestrator | Monday 23 February 2026 20:49:50 +0000 (0:00:01.190) 0:00:04.870 ******* 2026-02-23 20:51:44.115288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-23 20:51:44.115294 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:51:44.115305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-23 20:51:44.115313 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:51:44.115322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-23 20:51:44.115386 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:51:44.115392 | orchestrator | 2026-02-23 20:51:44.115396 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-23 20:51:44.115400 | orchestrator | 
Monday 23 February 2026 20:49:50 +0000 (0:00:00.449) 0:00:05.319 ******* 2026-02-23 20:51:44.115404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-23 20:51:44.115408 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:51:44.115411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-23 20:51:44.115419 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:51:44.115423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-23 20:51:44.115427 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:51:44.115431 | orchestrator | 2026-02-23 20:51:44.115434 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-23 20:51:44.115438 | orchestrator | Monday 23 February 2026 20:49:51 +0000 (0:00:00.766) 0:00:06.086 ******* 2026-02-23 20:51:44.115444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115584 | orchestrator | 2026-02-23 20:51:44.115590 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-23 20:51:44.115597 | orchestrator | Monday 23 February 2026 20:49:52 +0000 (0:00:01.110) 0:00:07.196 ******* 2026-02-23 20:51:44.115603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-23 20:51:44.115629 | orchestrator | 2026-02-23 20:51:44.115635 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-23 20:51:44.115642 | orchestrator | Monday 23 February 2026 20:49:53 +0000 (0:00:01.210) 0:00:08.406 ******* 2026-02-23 20:51:44.115648 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:51:44.115655 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:51:44.115662 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:51:44.115668 | orchestrator | 2026-02-23 20:51:44.115675 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-23 20:51:44.115679 | orchestrator | Monday 23 February 2026 20:49:54 +0000 (0:00:00.385) 0:00:08.792 
******* 2026-02-23 20:51:44.115685 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-23 20:51:44.115690 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-23 20:51:44.115693 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-23 20:51:44.115697 | orchestrator | 2026-02-23 20:51:44.115701 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-23 20:51:44.115708 | orchestrator | Monday 23 February 2026 20:49:55 +0000 (0:00:01.085) 0:00:09.877 ******* 2026-02-23 20:51:44.115714 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-23 20:51:44.115721 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-23 20:51:44.115728 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-23 20:51:44.115734 | orchestrator | 2026-02-23 20:51:44.115740 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-23 20:51:44.115747 | orchestrator | Monday 23 February 2026 20:49:56 +0000 (0:00:01.061) 0:00:10.939 ******* 2026-02-23 20:51:44.115771 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:51:44.115779 | orchestrator | 2026-02-23 20:51:44.115786 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-02-23 20:51:44.115792 | orchestrator | Monday 23 February 2026 20:49:56 +0000 (0:00:00.693) 0:00:11.632 ******* 2026-02-23 20:51:44.115799 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-23 20:51:44.115803 | orchestrator | issue: 
'/etc/kolla/grafana/dashboards' is not a directory 2026-02-23 20:51:44.115811 | orchestrator | ok: [testbed-node-0] 2026-02-23 20:51:44.115815 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:51:44.115819 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:51:44.115822 | orchestrator | 2026-02-23 20:51:44.115826 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-23 20:51:44.115830 | orchestrator | Monday 23 February 2026 20:49:57 +0000 (0:00:00.600) 0:00:12.232 ******* 2026-02-23 20:51:44.115834 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:51:44.115837 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:51:44.115841 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:51:44.115845 | orchestrator | 2026-02-23 20:51:44.115849 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-23 20:51:44.115852 | orchestrator | Monday 23 February 2026 20:49:57 +0000 (0:00:00.407) 0:00:12.640 ******* 2026-02-23 20:51:44.115910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1106175, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.680157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.115919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1106175, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.680157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.115926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1106175, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.680157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1106848, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8362749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1106848, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8362749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1106848, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8362749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1106876, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8476543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1106876, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8476543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1106876, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8476543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1106843, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8337648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1106843, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8337648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1106843, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8337648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1106877, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8487651, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116097 | orchestrator | changed: [testbed-node-2] => (item={'key': 
2026-02-23 20:51:44.116101 | orchestrator | changed: [testbed-node-1] => (item=ceph/rgw-s3-analytics.json, path=/operations/grafana/dashboards/ceph/rgw-s3-analytics.json, mode=0644, owner=root, group=root, size=170293)
2026-02-23 20:51:44.116105 | orchestrator | changed: [testbed-node-0, testbed-node-2, testbed-node-1] => (item=ceph/ceph-nvmeof-performance.json, path=/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json, mode=0644, owner=root, group=root, size=33297)
2026-02-23 20:51:44.116135 | orchestrator | changed: [testbed-node-0, testbed-node-2, testbed-node-1] => (item=ceph/osd-device-details.json, path=/operations/grafana/dashboards/ceph/osd-device-details.json, mode=0644, owner=root, group=root, size=26346)
2026-02-23 20:51:44.116185 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/radosgw-overview.json, path=/operations/grafana/dashboards/ceph/radosgw-overview.json, mode=0644, owner=root, group=root, size=46110)
2026-02-23 20:51:44.116218 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/README.md, path=/operations/grafana/dashboards/ceph/README.md, mode=0644, owner=root, group=root, size=84)
2026-02-23 20:51:44.116230 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/ceph-cluster.json, path=/operations/grafana/dashboards/ceph/ceph-cluster.json, mode=0644, owner=root, group=root, size=34113)
2026-02-23 20:51:44.116255 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/cephfs-overview.json, path=/operations/grafana/dashboards/ceph/cephfs-overview.json, mode=0644, owner=root, group=root, size=9025)
2026-02-23 20:51:44.116268 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/pool-detail.json, path=/operations/grafana/dashboards/ceph/pool-detail.json, mode=0644, owner=root, group=root, size=19231)
2026-02-23 20:51:44.116293 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/rbd-details.json, path=/operations/grafana/dashboards/ceph/rbd-details.json, mode=0644, owner=root, group=root, size=13320)
2026-02-23 20:51:44.116305 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/ceph_overview.json, path=/operations/grafana/dashboards/ceph/ceph_overview.json, mode=0644, owner=root, group=root, size=80386)
2026-02-23 20:51:44.116322 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/radosgw-detail.json, path=/operations/grafana/dashboards/ceph/radosgw-detail.json, mode=0644, owner=root, group=root, size=20042)
2026-02-23 20:51:44.116344 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/smb-overview.json, path=/operations/grafana/dashboards/ceph/smb-overview.json, mode=0644, owner=root, group=root, size=29877)
2026-02-23 20:51:44.116360 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/osds-overview.json, path=/operations/grafana/dashboards/ceph/osds-overview.json, mode=0644, owner=root, group=root, size=38375)
2026-02-23 20:51:44.116374 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/multi-cluster-overview.json, path=/operations/grafana/dashboards/ceph/multi-cluster-overview.json, mode=0644, owner=root, group=root, size=63043)
2026-02-23 20:51:44.116390 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/hosts-overview.json, path=/operations/grafana/dashboards/ceph/hosts-overview.json, mode=0644, owner=root, group=root, size=27387)
2026-02-23 20:51:44.116406 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/pool-overview.json, path=/operations/grafana/dashboards/ceph/pool-overview.json, mode=0644, owner=root, group=root, size=49016)
2026-02-23 20:51:44.116422 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/host-details.json, path=/operations/grafana/dashboards/ceph/host-details.json, mode=0644, owner=root, group=root, size=43303)
2026-02-23 20:51:44.116443 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/radosgw-sync-overview.json, path=/operations/grafana/dashboards/ceph/radosgw-sync-overview.json, mode=0644, owner=root, group=root, size=16614)
2026-02-23 20:51:44.116468 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/ceph-nvmeof.json, path=/operations/grafana/dashboards/ceph/ceph-nvmeof.json, mode=0644, owner=root, group=root, size=52667)
2026-02-23 20:51:44.116495 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=openstack/openstack.json, path=/operations/grafana/dashboards/openstack/openstack.json, mode=0644, owner=root, group=root, size=57270)
2026-02-23 20:51:44.116566 | orchestrator | changed: [testbed-node-0] => (item={'key':
'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1106886, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8597653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1106886, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8597653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1106886, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8597653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116597 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1106883, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.853765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1106883, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.853765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1106883, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.853765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-02-23 20:51:44.116618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1106891, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8640797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1106880, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8521247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1106891, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8640797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1106891, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8640797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1106880, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8521247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1106880, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 
1771804946.0, 'ctime': 1771876884.8521247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1106918, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8741784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1106892, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8702617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
65458, 'inode': 1106918, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8741784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1106918, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8741784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1106919, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8748195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1106892, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8702617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1106892, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8702617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1106944, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.882704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1106919, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8748195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1106919, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8748195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1106917, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8734112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116760 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1106944, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.882704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1106944, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.882704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1106888, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8627653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-02-23 20:51:44.116784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1106917, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8734112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1106917, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8734112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1106885, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8567653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1106888, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8627653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1106888, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8627653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1106887, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8597653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1106885, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8567653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1106885, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8567653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1106884, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.855765, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1106887, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8597653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1106887, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8597653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1106890, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 
1771804946.0, 'ctime': 1771876884.8627653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1106884, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.855765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1106884, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.855765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 
'inode': 1106938, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8823347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1106890, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8627653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1106890, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8627653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1106938, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8823347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1106924, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8775792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1106938, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8823347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1106881, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8523939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1106924, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8775792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1106924, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8775792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 
20:51:44.116923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1106882, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.853765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1106881, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8523939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1106881, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8523939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1106908, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8734112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1106882, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.853765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1106882, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.853765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1106908, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8734112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1106920, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.875558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-23 20:51:44.116965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1106908, 'dev': 152, 'nlink': 1, 'atime': 
1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.8734112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-23 20:51:44.116971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1106920, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.875558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-23 20:51:44.116975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1106920, 'dev': 152, 'nlink': 1, 'atime': 1771804946.0, 'mtime': 1771804946.0, 'ctime': 1771876884.875558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-23 20:51:44.116982 | orchestrator |
2026-02-23 20:51:44.116986 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-02-23 20:51:44.116991 | orchestrator | Monday 23 February 2026 20:50:35 +0000 (0:00:37.622) 0:00:50.262 *******
2026-02-23 20:51:44.116995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-23 20:51:44.116999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-23 20:51:44.117003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'3000', 'listen_port': '3000'}}}})
2026-02-23 20:51:44.117007 | orchestrator |
2026-02-23 20:51:44.117011 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-02-23 20:51:44.117015 | orchestrator | Monday 23 February 2026 20:50:36 +0000 (0:00:00.890) 0:00:51.153 *******
2026-02-23 20:51:44.117020 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:51:44.117025 | orchestrator |
2026-02-23 20:51:44.117029 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-02-23 20:51:44.117032 | orchestrator | Monday 23 February 2026 20:50:38 +0000 (0:00:01.993) 0:00:53.146 *******
2026-02-23 20:51:44.117036 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:51:44.117040 | orchestrator |
2026-02-23 20:51:44.117046 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-23 20:51:44.117052 | orchestrator | Monday 23 February 2026 20:50:40 +0000 (0:00:02.144) 0:00:55.291 *******
2026-02-23 20:51:44.117058 | orchestrator |
2026-02-23 20:51:44.117063 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-23 20:51:44.117070 | orchestrator | Monday 23 February 2026 20:50:40 +0000 (0:00:00.080) 0:00:55.371 *******
2026-02-23 20:51:44.117076 | orchestrator |
2026-02-23 20:51:44.117082 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-23 20:51:44.117092 | orchestrator | Monday 23 February 2026 20:50:40 +0000 (0:00:00.167) 0:00:55.539 *******
2026-02-23 20:51:44.117098 | orchestrator |
2026-02-23 20:51:44.117105 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-02-23 20:51:44.117111 | orchestrator | Monday 23 February 2026 20:50:40 +0000 (0:00:00.060) 0:00:55.599 *******
2026-02-23 20:51:44.117117 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:51:44.117126 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:51:44.117130 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:51:44.117134 | orchestrator |
2026-02-23 20:51:44.117137 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-02-23 20:51:44.117141 | orchestrator | Monday 23 February 2026 20:50:42 +0000 (0:00:01.671) 0:00:57.271 *******
2026-02-23 20:51:44.117161 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:51:44.117165 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:51:44.117168 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-02-23 20:51:44.117173 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-02-23 20:51:44.117176 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:51:44.117181 | orchestrator |
2026-02-23 20:51:44.117185 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-02-23 20:51:44.117189 | orchestrator | Monday 23 February 2026 20:51:10 +0000 (0:00:27.495) 0:01:24.767 *******
2026-02-23 20:51:44.117194 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:51:44.117198 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:51:44.117202 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:51:44.117206 | orchestrator |
2026-02-23 20:51:44.117210 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-02-23 20:51:44.117217 | orchestrator | Monday 23 February 2026 20:51:38 +0000 (0:00:28.007) 0:01:52.775 *******
2026-02-23 20:51:44.117224 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:51:44.117233 | orchestrator |
2026-02-23 20:51:44.117241 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-02-23 20:51:44.117249 | orchestrator | Monday 23 February 2026 20:51:39 +0000 (0:00:01.941) 0:01:54.717 *******
2026-02-23 20:51:44.117255 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:51:44.117261 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:51:44.117267 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:51:44.117272 | orchestrator |
2026-02-23 20:51:44.117279 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-02-23 20:51:44.117285 | orchestrator | Monday 23 February 2026 20:51:40 +0000 (0:00:00.433) 0:01:55.150 *******
2026-02-23 20:51:44.117291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-02-23 20:51:44.117299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-02-23 20:51:44.117307 | orchestrator |
2026-02-23 20:51:44.117313 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-02-23 20:51:44.117320 | orchestrator | Monday 23 February 2026 20:51:42 +0000 (0:00:02.147) 0:01:57.298 *******
2026-02-23 20:51:44.117326 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:51:44.117330 | orchestrator |
2026-02-23 20:51:44.117334 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:51:44.117339 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-23 20:51:44.117348 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-23 20:51:44.117353 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-23 20:51:44.117357 | orchestrator |
2026-02-23 20:51:44.117361 | orchestrator |
2026-02-23 20:51:44.117365 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:51:44.117370 | orchestrator | Monday 23 February 2026 20:51:42 +0000 (0:00:00.252) 0:01:57.551 *******
2026-02-23 20:51:44.117374 | orchestrator | ===============================================================================
2026-02-23 20:51:44.117381 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.62s
2026-02-23 20:51:44.117385 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.01s
2026-02-23 20:51:44.117389 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.50s
2026-02-23 20:51:44.117394 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.15s
2026-02-23 20:51:44.117398 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.14s
2026-02-23 20:51:44.117402 | orchestrator | grafana : Creating grafana database ------------------------------------- 1.99s
2026-02-23 20:51:44.117406 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 1.94s
2026-02-23 20:51:44.117411 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.67s
2026-02-23 20:51:44.117415 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.21s
2026-02-23 20:51:44.117419 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.19s
2026-02-23 20:51:44.117423 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.11s
2026-02-23 20:51:44.117428 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.09s
2026-02-23 20:51:44.117435 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.06s
2026-02-23 20:51:44.117440 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.89s
2026-02-23 20:51:44.117444 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.85s
2026-02-23 20:51:44.117448 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.77s
2026-02-23 20:51:44.117453 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.69s
2026-02-23 20:51:44.117459 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s
2026-02-23 20:51:44.117468 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.66s
2026-02-23 20:51:44.117476 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.60s
2026-02-23 20:51:44.117482 | orchestrator | 2026-02-23 20:51:44 | INFO  | Task 8b14b5be-70ff-4097-83fe-a0838bdcc0bc is in state SUCCESS
2026-02-23 20:51:44.118125 | orchestrator | 2026-02-23 20:51:44 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED
2026-02-23 20:51:44.118614 | orchestrator | 2026-02-23 20:51:44 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:51:47.154439 | orchestrator | 2026-02-23 20:51:47 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED
2026-02-23 20:51:47.154499 | orchestrator | 2026-02-23 20:51:47 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED
2026-02-23 20:51:47.154509 | orchestrator | 2026-02-23 20:51:47 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:51:50.203285 | orchestrator | 2026-02-23 20:51:50 | INFO  | Task
e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:51:50.204913 | orchestrator | 2026-02-23 20:51:50 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:50.204981 | orchestrator | 2026-02-23 20:51:50 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:53.234473 | orchestrator | 2026-02-23 20:51:53 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:51:53.235977 | orchestrator | 2026-02-23 20:51:53 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:53.236024 | orchestrator | 2026-02-23 20:51:53 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:56.271225 | orchestrator | 2026-02-23 20:51:56 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:51:56.272540 | orchestrator | 2026-02-23 20:51:56 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:56.272576 | orchestrator | 2026-02-23 20:51:56 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:51:59.306300 | orchestrator | 2026-02-23 20:51:59 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:51:59.307715 | orchestrator | 2026-02-23 20:51:59 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:51:59.307764 | orchestrator | 2026-02-23 20:51:59 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:52:02.345959 | orchestrator | 2026-02-23 20:52:02 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:52:02.346654 | orchestrator | 2026-02-23 20:52:02 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:52:02.346680 | orchestrator | 2026-02-23 20:52:02 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:52:05.385578 | orchestrator | 2026-02-23 20:52:05 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 
20:52:05.386902 | orchestrator | 2026-02-23 20:52:05 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:52:05.386952 | orchestrator | 2026-02-23 20:52:05 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:24.551270 | orchestrator | 2026-02-23 20:53:24 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state
STARTED 2026-02-23 20:53:24.552866 | orchestrator | 2026-02-23 20:53:24 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:24.552917 | orchestrator | 2026-02-23 20:53:24 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:27.598214 | orchestrator | 2026-02-23 20:53:27 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:27.600793 | orchestrator | 2026-02-23 20:53:27 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:27.601650 | orchestrator | 2026-02-23 20:53:27 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:30.631735 | orchestrator | 2026-02-23 20:53:30 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:30.633661 | orchestrator | 2026-02-23 20:53:30 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:30.633757 | orchestrator | 2026-02-23 20:53:30 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:33.676263 | orchestrator | 2026-02-23 20:53:33 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:33.677685 | orchestrator | 2026-02-23 20:53:33 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:33.677724 | orchestrator | 2026-02-23 20:53:33 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:36.723212 | orchestrator | 2026-02-23 20:53:36 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:36.725334 | orchestrator | 2026-02-23 20:53:36 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:36.725385 | orchestrator | 2026-02-23 20:53:36 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:39.781506 | orchestrator | 2026-02-23 20:53:39 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:39.782535 | orchestrator | 2026-02-23 20:53:39 | INFO  
| Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:39.783266 | orchestrator | 2026-02-23 20:53:39 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:42.834468 | orchestrator | 2026-02-23 20:53:42 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:42.834797 | orchestrator | 2026-02-23 20:53:42 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:42.834809 | orchestrator | 2026-02-23 20:53:42 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:45.878270 | orchestrator | 2026-02-23 20:53:45 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:45.881621 | orchestrator | 2026-02-23 20:53:45 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:45.881675 | orchestrator | 2026-02-23 20:53:45 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:48.928217 | orchestrator | 2026-02-23 20:53:48 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:48.929072 | orchestrator | 2026-02-23 20:53:48 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:48.929686 | orchestrator | 2026-02-23 20:53:48 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:51.980721 | orchestrator | 2026-02-23 20:53:51 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:51.981971 | orchestrator | 2026-02-23 20:53:51 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:51.982046 | orchestrator | 2026-02-23 20:53:51 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:55.034399 | orchestrator | 2026-02-23 20:53:55 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:55.036475 | orchestrator | 2026-02-23 20:53:55 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 
20:53:55.036527 | orchestrator | 2026-02-23 20:53:55 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:53:58.081663 | orchestrator | 2026-02-23 20:53:58 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:53:58.083518 | orchestrator | 2026-02-23 20:53:58 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:53:58.083572 | orchestrator | 2026-02-23 20:53:58 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:01.133460 | orchestrator | 2026-02-23 20:54:01 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:01.133525 | orchestrator | 2026-02-23 20:54:01 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:01.133535 | orchestrator | 2026-02-23 20:54:01 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:04.170123 | orchestrator | 2026-02-23 20:54:04 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:04.171917 | orchestrator | 2026-02-23 20:54:04 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:04.171976 | orchestrator | 2026-02-23 20:54:04 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:07.207168 | orchestrator | 2026-02-23 20:54:07 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:07.208946 | orchestrator | 2026-02-23 20:54:07 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:07.209119 | orchestrator | 2026-02-23 20:54:07 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:10.251834 | orchestrator | 2026-02-23 20:54:10 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:10.257006 | orchestrator | 2026-02-23 20:54:10 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:10.257050 | orchestrator | 2026-02-23 20:54:10 | INFO  | Wait 1 second(s) 
until the next check 2026-02-23 20:54:13.293485 | orchestrator | 2026-02-23 20:54:13 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:13.295302 | orchestrator | 2026-02-23 20:54:13 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:13.295357 | orchestrator | 2026-02-23 20:54:13 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:16.322498 | orchestrator | 2026-02-23 20:54:16 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:16.324010 | orchestrator | 2026-02-23 20:54:16 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:16.324051 | orchestrator | 2026-02-23 20:54:16 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:19.349205 | orchestrator | 2026-02-23 20:54:19 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:19.349616 | orchestrator | 2026-02-23 20:54:19 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:19.349923 | orchestrator | 2026-02-23 20:54:19 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:22.392939 | orchestrator | 2026-02-23 20:54:22 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:22.394480 | orchestrator | 2026-02-23 20:54:22 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:22.394512 | orchestrator | 2026-02-23 20:54:22 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:25.436885 | orchestrator | 2026-02-23 20:54:25 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:25.437455 | orchestrator | 2026-02-23 20:54:25 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:25.437711 | orchestrator | 2026-02-23 20:54:25 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:28.477272 | orchestrator | 2026-02-23 
20:54:28 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:28.477323 | orchestrator | 2026-02-23 20:54:28 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:28.477330 | orchestrator | 2026-02-23 20:54:28 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:31.526258 | orchestrator | 2026-02-23 20:54:31 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:31.530176 | orchestrator | 2026-02-23 20:54:31 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:31.530290 | orchestrator | 2026-02-23 20:54:31 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:34.572125 | orchestrator | 2026-02-23 20:54:34 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:34.572908 | orchestrator | 2026-02-23 20:54:34 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:34.573147 | orchestrator | 2026-02-23 20:54:34 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:37.614461 | orchestrator | 2026-02-23 20:54:37 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:37.614519 | orchestrator | 2026-02-23 20:54:37 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:37.614528 | orchestrator | 2026-02-23 20:54:37 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:40.647255 | orchestrator | 2026-02-23 20:54:40 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:40.647314 | orchestrator | 2026-02-23 20:54:40 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:40.647386 | orchestrator | 2026-02-23 20:54:40 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:43.682371 | orchestrator | 2026-02-23 20:54:43 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state 
STARTED 2026-02-23 20:54:43.683320 | orchestrator | 2026-02-23 20:54:43 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:43.683363 | orchestrator | 2026-02-23 20:54:43 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:46.725994 | orchestrator | 2026-02-23 20:54:46 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:46.728711 | orchestrator | 2026-02-23 20:54:46 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:46.728858 | orchestrator | 2026-02-23 20:54:46 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:49.770829 | orchestrator | 2026-02-23 20:54:49 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:49.772517 | orchestrator | 2026-02-23 20:54:49 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:49.772573 | orchestrator | 2026-02-23 20:54:49 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:52.820831 | orchestrator | 2026-02-23 20:54:52 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:52.823585 | orchestrator | 2026-02-23 20:54:52 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:52.826407 | orchestrator | 2026-02-23 20:54:52 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:55.866386 | orchestrator | 2026-02-23 20:54:55 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:55.868242 | orchestrator | 2026-02-23 20:54:55 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:55.868285 | orchestrator | 2026-02-23 20:54:55 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:54:58.916319 | orchestrator | 2026-02-23 20:54:58 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:54:58.917167 | orchestrator | 2026-02-23 20:54:58 | INFO  
| Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:54:58.917362 | orchestrator | 2026-02-23 20:54:58 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:01.953006 | orchestrator | 2026-02-23 20:55:01 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:01.953327 | orchestrator | 2026-02-23 20:55:01 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:01.953343 | orchestrator | 2026-02-23 20:55:01 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:04.983594 | orchestrator | 2026-02-23 20:55:04 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:04.984092 | orchestrator | 2026-02-23 20:55:04 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:04.985915 | orchestrator | 2026-02-23 20:55:04 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:08.013335 | orchestrator | 2026-02-23 20:55:08 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:08.013472 | orchestrator | 2026-02-23 20:55:08 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:08.013483 | orchestrator | 2026-02-23 20:55:08 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:11.053420 | orchestrator | 2026-02-23 20:55:11 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:11.053484 | orchestrator | 2026-02-23 20:55:11 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:11.053491 | orchestrator | 2026-02-23 20:55:11 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:14.107804 | orchestrator | 2026-02-23 20:55:14 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:14.109290 | orchestrator | 2026-02-23 20:55:14 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 
20:55:14.109320 | orchestrator | 2026-02-23 20:55:14 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:17.158946 | orchestrator | 2026-02-23 20:55:17 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:17.160142 | orchestrator | 2026-02-23 20:55:17 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:17.160444 | orchestrator | 2026-02-23 20:55:17 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:20.205440 | orchestrator | 2026-02-23 20:55:20 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:20.205511 | orchestrator | 2026-02-23 20:55:20 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:20.205520 | orchestrator | 2026-02-23 20:55:20 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:23.246260 | orchestrator | 2026-02-23 20:55:23 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:23.247369 | orchestrator | 2026-02-23 20:55:23 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:23.247413 | orchestrator | 2026-02-23 20:55:23 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:26.282298 | orchestrator | 2026-02-23 20:55:26 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:26.282518 | orchestrator | 2026-02-23 20:55:26 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:26.282530 | orchestrator | 2026-02-23 20:55:26 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:29.328805 | orchestrator | 2026-02-23 20:55:29 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:29.330820 | orchestrator | 2026-02-23 20:55:29 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:29.330869 | orchestrator | 2026-02-23 20:55:29 | INFO  | Wait 1 second(s) 
until the next check 2026-02-23 20:55:32.373664 | orchestrator | 2026-02-23 20:55:32 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:32.374980 | orchestrator | 2026-02-23 20:55:32 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:32.375009 | orchestrator | 2026-02-23 20:55:32 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:35.411239 | orchestrator | 2026-02-23 20:55:35 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:35.412791 | orchestrator | 2026-02-23 20:55:35 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:35.412842 | orchestrator | 2026-02-23 20:55:35 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:38.459062 | orchestrator | 2026-02-23 20:55:38 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:38.461329 | orchestrator | 2026-02-23 20:55:38 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:38.461381 | orchestrator | 2026-02-23 20:55:38 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:41.513721 | orchestrator | 2026-02-23 20:55:41 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:41.514698 | orchestrator | 2026-02-23 20:55:41 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:41.515070 | orchestrator | 2026-02-23 20:55:41 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:44.551993 | orchestrator | 2026-02-23 20:55:44 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED 2026-02-23 20:55:44.554049 | orchestrator | 2026-02-23 20:55:44 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state STARTED 2026-02-23 20:55:44.554105 | orchestrator | 2026-02-23 20:55:44 | INFO  | Wait 1 second(s) until the next check 2026-02-23 20:55:47.596701 | orchestrator | 2026-02-23 
20:55:47 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED
2026-02-23 20:55:47.601715 | orchestrator | 2026-02-23 20:55:47 | INFO  | Task 70a7cb40-cf12-42f8-82be-b271c1362a77 is in state SUCCESS
2026-02-23 20:55:47.603469 | orchestrator |
2026-02-23 20:55:47.603518 | orchestrator |
2026-02-23 20:55:47.603524 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:55:47.603531 | orchestrator |
2026-02-23 20:55:47.603536 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:55:47.603542 | orchestrator | Monday 23 February 2026 20:49:21 +0000 (0:00:00.176) 0:00:00.176 *******
2026-02-23 20:55:47.603547 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:55:47.603553 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:55:47.603558 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:55:47.603563 | orchestrator |
2026-02-23 20:55:47.603568 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-23 20:55:47.603573 | orchestrator | Monday 23 February 2026 20:49:21 +0000 (0:00:00.282) 0:00:00.459 *******
2026-02-23 20:55:47.603579 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-02-23 20:55:47.603584 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-02-23 20:55:47.603589 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-02-23 20:55:47.603594 | orchestrator |
2026-02-23 20:55:47.603599 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-02-23 20:55:47.603604 | orchestrator |
2026-02-23 20:55:47.603659 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-02-23 20:55:47.603665 | orchestrator | Monday 23 February 2026 20:49:22 +0000 (0:00:00.835) 0:00:01.294 *******
2026-02-23 20:55:47.603670 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:55:47.603675 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:55:47.603680 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:55:47.603685 | orchestrator |
2026-02-23 20:55:47.603689 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:55:47.603695 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:55:47.603701 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:55:47.603706 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:55:47.603711 | orchestrator |
2026-02-23 20:55:47.603716 | orchestrator |
2026-02-23 20:55:47.603721 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:55:47.603726 | orchestrator | Monday 23 February 2026 20:51:42 +0000 (0:02:19.979) 0:02:21.273 *******
2026-02-23 20:55:47.603732 | orchestrator | ===============================================================================
2026-02-23 20:55:47.603737 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 139.98s
2026-02-23 20:55:47.603742 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s
2026-02-23 20:55:47.603747 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2026-02-23 20:55:47.603752 | orchestrator |
2026-02-23 20:55:47.603757 | orchestrator |
2026-02-23 20:55:47.603762 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:55:47.603767 | orchestrator |
2026-02-23 20:55:47.603772 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-23 20:55:47.603777 | orchestrator | Monday 23 February 2026 20:47:47 +0000 (0:00:00.330) 0:00:00.330 *******
2026-02-23 20:55:47.603782 | orchestrator | changed: [testbed-manager]
2026-02-23 20:55:47.603787 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.603799 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:55:47.603808 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:55:47.603813 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:55:47.603892 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:55:47.603942 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:55:47.603948 | orchestrator |
2026-02-23 20:55:47.603953 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:55:47.603959 | orchestrator | Monday 23 February 2026 20:47:48 +0000 (0:00:01.044) 0:00:01.374 *******
2026-02-23 20:55:47.603964 | orchestrator | changed: [testbed-manager]
2026-02-23 20:55:47.603969 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.603974 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:55:47.603979 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:55:47.603985 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:55:47.603990 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:55:47.603995 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:55:47.604000 | orchestrator |
2026-02-23 20:55:47.604005 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-23 20:55:47.604010 | orchestrator | Monday 23 February 2026 20:47:49 +0000 (0:00:01.077) 0:00:02.451 *******
2026-02-23 20:55:47.604016 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-23 20:55:47.604021 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-23 20:55:47.604026 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-23 20:55:47.604031 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-23 20:55:47.604036 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-23 20:55:47.604041 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-23 20:55:47.604046 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-23 20:55:47.604051 | orchestrator |
2026-02-23 20:55:47.604056 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-23 20:55:47.604061 | orchestrator |
2026-02-23 20:55:47.604066 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-23 20:55:47.604072 | orchestrator | Monday 23 February 2026 20:47:50 +0000 (0:00:01.156) 0:00:03.608 *******
2026-02-23 20:55:47.604077 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:55:47.604082 | orchestrator |
2026-02-23 20:55:47.604087 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-23 20:55:47.604091 | orchestrator | Monday 23 February 2026 20:47:51 +0000 (0:00:00.651) 0:00:04.259 *******
2026-02-23 20:55:47.604105 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-23 20:55:47.604124 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-23 20:55:47.604129 | orchestrator |
2026-02-23 20:55:47.604133 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-23 20:55:47.604139 | orchestrator | Monday 23 February 2026 20:47:55 +0000 (0:00:04.374) 0:00:08.633 *******
2026-02-23 20:55:47.604143 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-23 20:55:47.604157 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-23 20:55:47.604166 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.604172 | orchestrator |
2026-02-23 20:55:47.604176 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-23 20:55:47.604181 | orchestrator | Monday 23 February 2026 20:47:59 +0000 (0:00:03.942) 0:00:12.575 *******
2026-02-23 20:55:47.604186 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.604190 | orchestrator |
2026-02-23 20:55:47.604195 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-23 20:55:47.604200 | orchestrator | Monday 23 February 2026 20:48:00 +0000 (0:00:01.050) 0:00:13.626 *******
2026-02-23 20:55:47.604204 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.604209 | orchestrator |
2026-02-23 20:55:47.604213 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-23 20:55:47.604217 | orchestrator | Monday 23 February 2026 20:48:03 +0000 (0:00:02.104) 0:00:15.730 *******
2026-02-23 20:55:47.604222 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.604240 | orchestrator |
2026-02-23 20:55:47.604245 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-23 20:55:47.604250 | orchestrator | Monday 23 February 2026 20:48:07 +0000 (0:00:04.790) 0:00:20.521 *******
2026-02-23 20:55:47.604255 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.604260 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604265 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604269 | orchestrator |
2026-02-23 20:55:47.604274 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-23 20:55:47.604279 | orchestrator | Monday 23 February 2026 20:48:08 +0000 (0:00:00.292) 0:00:20.813 *******
2026-02-23 20:55:47.604284 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:55:47.604289 | orchestrator |
2026-02-23 20:55:47.604294 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-23 20:55:47.604299 | orchestrator | Monday 23 February 2026 20:48:38 +0000 (0:00:30.672) 0:00:51.486 *******
2026-02-23 20:55:47.604304 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.604309 | orchestrator |
2026-02-23 20:55:47.604313 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-23 20:55:47.604318 | orchestrator | Monday 23 February 2026 20:48:53 +0000 (0:00:14.543) 0:01:06.029 *******
2026-02-23 20:55:47.604323 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:55:47.604329 | orchestrator |
2026-02-23 20:55:47.604334 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-23 20:55:47.604338 | orchestrator | Monday 23 February 2026 20:49:05 +0000 (0:00:12.543) 0:01:18.572 *******
2026-02-23 20:55:47.604343 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:55:47.604348 | orchestrator |
2026-02-23 20:55:47.604359 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-23 20:55:47.604364 | orchestrator | Monday 23 February 2026 20:49:07 +0000 (0:00:01.223) 0:01:19.795 *******
2026-02-23 20:55:47.604368 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.604373 | orchestrator |
2026-02-23 20:55:47.604377 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-23 20:55:47.604382 | orchestrator | Monday 23 February 2026 20:49:07 +0000 (0:00:00.496) 0:01:20.292 *******
2026-02-23 20:55:47.604387 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:55:47.604392 | orchestrator |
2026-02-23 20:55:47.604397 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-23 20:55:47.604401 | orchestrator | Monday 23 February 2026 20:49:08 +0000 (0:00:00.617) 0:01:20.909 *******
2026-02-23 20:55:47.604406 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:55:47.604410 | orchestrator |
2026-02-23 20:55:47.604415 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-23 20:55:47.604420 | orchestrator | Monday 23 February 2026 20:49:27 +0000 (0:00:19.397) 0:01:40.307 *******
2026-02-23 20:55:47.604424 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.604429 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604433 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604448 | orchestrator |
2026-02-23 20:55:47.604454 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-23 20:55:47.604458 | orchestrator |
2026-02-23 20:55:47.604463 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-23 20:55:47.604469 | orchestrator | Monday 23 February 2026 20:49:27 +0000 (0:00:00.273) 0:01:40.581 *******
2026-02-23 20:55:47.604474 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:55:47.604479 | orchestrator |
2026-02-23 20:55:47.604484 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-23 20:55:47.604490 | orchestrator | Monday 23 February 2026 20:49:28 +0000 (0:00:00.568) 0:01:41.150 *******
2026-02-23 20:55:47.604495 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604508 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604514 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.604525 | orchestrator |
2026-02-23 20:55:47.604530 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-23 20:55:47.604535 | orchestrator | Monday 23 February 2026 20:49:30 +0000 (0:00:01.668) 0:01:42.819 *******
2026-02-23 20:55:47.604540 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604544 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604549 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.604555 | orchestrator |
2026-02-23 20:55:47.604561 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-23 20:55:47.604566 | orchestrator | Monday 23 February 2026 20:49:31 +0000 (0:00:01.834) 0:01:44.653 *******
2026-02-23 20:55:47.604571 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.604595 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604636 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604642 | orchestrator |
2026-02-23 20:55:47.604660 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-23 20:55:47.604665 | orchestrator | Monday 23 February 2026 20:49:32 +0000 (0:00:00.293) 0:01:44.947 *******
2026-02-23 20:55:47.604671 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-23 20:55:47.604676 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604681 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-23 20:55:47.604686 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604690 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-23 20:55:47.604695 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-23 20:55:47.604699 | orchestrator |
2026-02-23 20:55:47.604707 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-23 20:55:47.604713 | orchestrator | Monday 23 February 2026 20:49:39 +0000 (0:00:06.819) 0:01:51.766 *******
2026-02-23 20:55:47.604718 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.604722 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604727 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604732 | orchestrator |
2026-02-23 20:55:47.604737 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-23 20:55:47.604742 | orchestrator | Monday 23 February 2026 20:49:39 +0000 (0:00:00.381) 0:01:52.147 *******
2026-02-23 20:55:47.604746 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-23 20:55:47.604750 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.604755 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-23 20:55:47.604759 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604763 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-23 20:55:47.604768 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604772 | orchestrator |
2026-02-23 20:55:47.604777 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-23 20:55:47.604781 | orchestrator | Monday 23 February 2026 20:49:40 +0000 (0:00:00.866) 0:01:53.014 *******
2026-02-23 20:55:47.604786 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604790 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.604795 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604799 | orchestrator |
2026-02-23 20:55:47.604804 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-23 20:55:47.604808 | orchestrator | Monday 23 February 2026 20:49:41 +0000 (0:00:00.703) 0:01:53.717 *******
2026-02-23 20:55:47.604813 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604817 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604823 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.604827 | orchestrator |
2026-02-23 20:55:47.604832 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-23 20:55:47.604836 | orchestrator | Monday 23 February 2026 20:49:41 +0000 (0:00:00.936) 0:01:54.654 *******
2026-02-23 20:55:47.604841 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604845 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604855 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:55:47.604860 | orchestrator |
2026-02-23 20:55:47.604864 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-23 20:55:47.604869 | orchestrator | Monday 23 February 2026 20:49:43 +0000 (0:00:02.039) 0:01:56.693 *******
2026-02-23 20:55:47.604874 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604878 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604883 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:55:47.604887 | orchestrator |
2026-02-23 20:55:47.604891 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-23 20:55:47.604896 | orchestrator | Monday 23 February 2026 20:50:05 +0000 (0:00:21.163) 0:02:17.856 *******
2026-02-23 20:55:47.604900 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604904 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604910 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:55:47.604914 | orchestrator |
2026-02-23 20:55:47.604919 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-23 20:55:47.604923 | orchestrator | Monday 23 February 2026 20:50:19 +0000 (0:00:13.884) 0:02:31.741 *******
2026-02-23 20:55:47.604928 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:55:47.604933 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604938 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604943 | orchestrator |
2026-02-23 20:55:47.604947 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-02-23 20:55:47.604953 | orchestrator | Monday 23 February 2026 20:50:19 +0000 (0:00:00.903) 0:02:32.645 *******
2026-02-23 20:55:47.604958 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.604963 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.604968 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:55:47.604973 | orchestrator | 2026-02-23 20:55:47.604978 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-23 20:55:47.604983 | orchestrator | Monday 23 February 2026 20:50:32 +0000 (0:00:12.889) 0:02:45.534 ******* 2026-02-23 20:55:47.604988 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.604993 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.604998 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.605004 | orchestrator | 2026-02-23 20:55:47.605009 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-23 20:55:47.605013 | orchestrator | Monday 23 February 2026 20:50:34 +0000 (0:00:01.289) 0:02:46.824 ******* 2026-02-23 20:55:47.605017 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.605020 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.605023 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.605026 | orchestrator | 2026-02-23 20:55:47.605029 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-23 20:55:47.605032 | orchestrator | 2026-02-23 20:55:47.605035 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-23 20:55:47.605038 | orchestrator | Monday 23 February 2026 20:50:34 +0000 (0:00:00.313) 0:02:47.137 ******* 2026-02-23 20:55:47.605041 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:55:47.605045 | orchestrator | 2026-02-23 20:55:47.605058 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-23 20:55:47.605064 | orchestrator | Monday 23 February 2026 20:50:34 +0000 (0:00:00.553) 0:02:47.690 ******* 2026-02-23 20:55:47.605069 | orchestrator | 
skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-23 20:55:47.605074 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-23 20:55:47.605079 | orchestrator | 2026-02-23 20:55:47.605084 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-23 20:55:47.605090 | orchestrator | Monday 23 February 2026 20:50:37 +0000 (0:00:02.978) 0:02:50.669 ******* 2026-02-23 20:55:47.605095 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-23 20:55:47.605112 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-23 20:55:47.605116 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-23 20:55:47.605119 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-23 20:55:47.605122 | orchestrator | 2026-02-23 20:55:47.605126 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-23 20:55:47.605129 | orchestrator | Monday 23 February 2026 20:50:44 +0000 (0:00:06.188) 0:02:56.857 ******* 2026-02-23 20:55:47.605132 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-23 20:55:47.605135 | orchestrator | 2026-02-23 20:55:47.605139 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-02-23 20:55:47.605142 | orchestrator | Monday 23 February 2026 20:50:48 +0000 (0:00:04.047) 0:03:00.905 ******* 2026-02-23 20:55:47.605145 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-23 20:55:47.605148 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-23 20:55:47.605151 | orchestrator | 2026-02-23 20:55:47.605155 | orchestrator | TASK 
[service-ks-register : nova | Creating roles] ***************************** 2026-02-23 20:55:47.605158 | orchestrator | Monday 23 February 2026 20:50:52 +0000 (0:00:03.828) 0:03:04.734 ******* 2026-02-23 20:55:47.605161 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-23 20:55:47.605164 | orchestrator | 2026-02-23 20:55:47.605167 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-23 20:55:47.605170 | orchestrator | Monday 23 February 2026 20:50:55 +0000 (0:00:02.976) 0:03:07.710 ******* 2026-02-23 20:55:47.605174 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-23 20:55:47.605177 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-23 20:55:47.605180 | orchestrator | 2026-02-23 20:55:47.605183 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-23 20:55:47.605188 | orchestrator | Monday 23 February 2026 20:51:02 +0000 (0:00:07.580) 0:03:15.291 ******* 2026-02-23 20:55:47.605197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605238 | orchestrator | 2026-02-23 20:55:47.605241 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-23 20:55:47.605244 | orchestrator | Monday 23 February 2026 20:51:03 +0000 (0:00:01.289) 0:03:16.581 ******* 2026-02-23 20:55:47.605247 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.605253 | orchestrator | 2026-02-23 20:55:47.605256 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-23 20:55:47.605261 | orchestrator | Monday 23 February 2026 20:51:03 +0000 (0:00:00.118) 0:03:16.699 ******* 2026-02-23 20:55:47.605266 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.605270 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.605275 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.605279 | orchestrator | 2026-02-23 20:55:47.605284 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-23 20:55:47.605288 | orchestrator | Monday 23 February 2026 20:51:04 +0000 (0:00:00.395) 0:03:17.094 ******* 2026-02-23 20:55:47.605299 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-23 20:55:47.605305 | orchestrator | 2026-02-23 
20:55:47.605309 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-23 20:55:47.605314 | orchestrator | Monday 23 February 2026 20:51:05 +0000 (0:00:00.717) 0:03:17.812 ******* 2026-02-23 20:55:47.605319 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.605324 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.605330 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.605335 | orchestrator | 2026-02-23 20:55:47.605340 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-23 20:55:47.605344 | orchestrator | Monday 23 February 2026 20:51:05 +0000 (0:00:00.277) 0:03:18.089 ******* 2026-02-23 20:55:47.605347 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:55:47.605350 | orchestrator | 2026-02-23 20:55:47.605353 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-23 20:55:47.605356 | orchestrator | Monday 23 February 2026 20:51:05 +0000 (0:00:00.586) 0:03:18.675 ******* 2026-02-23 20:55:47.605360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605394 | orchestrator | 2026-02-23 20:55:47.605398 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-23 20:55:47.605403 | orchestrator | Monday 23 February 2026 20:51:08 +0000 (0:00:02.668) 0:03:21.343 ******* 2026-02-23 20:55:47.605409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:55:47.605418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.605424 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.605436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:55:47.605442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.605447 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.605453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:55:47.605459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.605463 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.605466 | orchestrator | 2026-02-23 20:55:47.605469 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-23 20:55:47.605472 | orchestrator | Monday 23 February 2026 20:51:09 +0000 (0:00:00.571) 0:03:21.915 ******* 2026-02-23 20:55:47.605802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:55:47.605818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.605821 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.605827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:55:47.605840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.605848 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.605860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:55:47.605866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.605871 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.605875 | orchestrator | 2026-02-23 20:55:47.605880 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-23 20:55:47.605885 | orchestrator | Monday 23 February 2026 20:51:09 +0000 (0:00:00.693) 0:03:22.609 ******* 2026-02-23 20:55:47.605890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2026-02-23 20:55:47.605908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605933 | orchestrator | 2026-02-23 20:55:47.605938 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-23 20:55:47.605943 | orchestrator | Monday 23 February 2026 20:51:12 +0000 (0:00:02.745) 0:03:25.355 ******* 2026-02-23 20:55:47.605952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605964 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.605973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.605992 | orchestrator | 2026-02-23 20:55:47.605995 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-23 20:55:47.605998 | orchestrator | Monday 23 February 2026 20:51:17 +0000 (0:00:05.177) 0:03:30.533 ******* 2026-02-23 20:55:47.606002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:55:47.606008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.606011 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.606039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:55:47.606043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.606046 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.606056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-23 20:55:47.606059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.606068 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.606071 | orchestrator | 2026-02-23 20:55:47.606074 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-23 20:55:47.606077 | orchestrator | Monday 23 February 2026 20:51:18 +0000 (0:00:00.511) 0:03:31.044 ******* 2026-02-23 20:55:47.606081 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:55:47.606084 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:55:47.606087 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:55:47.606090 | orchestrator | 2026-02-23 20:55:47.606093 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-23 20:55:47.606096 | orchestrator | Monday 23 February 2026 20:51:19 +0000 (0:00:01.550) 0:03:32.594 
******* 2026-02-23 20:55:47.606100 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.606103 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.606106 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.606110 | orchestrator | 2026-02-23 20:55:47.606113 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-23 20:55:47.606116 | orchestrator | Monday 23 February 2026 20:51:20 +0000 (0:00:00.281) 0:03:32.875 ******* 2026-02-23 20:55:47.606119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.606128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.606134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-23 20:55:47.606139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.606145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.606151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.606158 | orchestrator | 2026-02-23 20:55:47.606190 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-23 20:55:47.606197 | orchestrator | Monday 23 February 2026 20:51:22 +0000 (0:00:02.212) 0:03:35.088 ******* 2026-02-23 20:55:47.606202 | orchestrator | 2026-02-23 20:55:47.606207 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-23 20:55:47.606281 | orchestrator | Monday 23 February 2026 20:51:22 +0000 (0:00:00.128) 0:03:35.217 ******* 2026-02-23 20:55:47.606289 | orchestrator | 2026-02-23 20:55:47.606293 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-23 20:55:47.606298 | orchestrator | Monday 23 February 2026 20:51:22 +0000 (0:00:00.118) 0:03:35.336 ******* 2026-02-23 20:55:47.606303 | orchestrator | 2026-02-23 20:55:47.606472 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-23 20:55:47.606476 | orchestrator | Monday 23 February 2026 20:51:22 +0000 (0:00:00.126) 0:03:35.462 ******* 2026-02-23 20:55:47.606484 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:55:47.606500 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:55:47.606503 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:55:47.606506 | orchestrator | 2026-02-23 20:55:47.606510 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-23 20:55:47.606513 | orchestrator | Monday 23 February 2026 20:51:36 +0000 (0:00:13.267) 0:03:48.730 ******* 2026-02-23 20:55:47.606517 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:55:47.606520 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:55:47.606523 | orchestrator | changed: 
[testbed-node-2] 2026-02-23 20:55:47.606527 | orchestrator | 2026-02-23 20:55:47.606530 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-23 20:55:47.606533 | orchestrator | 2026-02-23 20:55:47.606537 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-23 20:55:47.606540 | orchestrator | Monday 23 February 2026 20:51:45 +0000 (0:00:09.827) 0:03:58.557 ******* 2026-02-23 20:55:47.606544 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:55:47.606548 | orchestrator | 2026-02-23 20:55:47.606553 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-23 20:55:47.606558 | orchestrator | Monday 23 February 2026 20:51:46 +0000 (0:00:01.074) 0:03:59.631 ******* 2026-02-23 20:55:47.606563 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.606568 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.606573 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.606578 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.606584 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.606588 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.606593 | orchestrator | 2026-02-23 20:55:47.606598 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-23 20:55:47.606603 | orchestrator | Monday 23 February 2026 20:51:47 +0000 (0:00:00.479) 0:04:00.110 ******* 2026-02-23 20:55:47.606608 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.606613 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.606618 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.606623 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 
2026-02-23 20:55:47.606629 | orchestrator | 2026-02-23 20:55:47.606633 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-23 20:55:47.606636 | orchestrator | Monday 23 February 2026 20:51:48 +0000 (0:00:00.855) 0:04:00.966 ******* 2026-02-23 20:55:47.606639 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-23 20:55:47.606642 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-23 20:55:47.606732 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-23 20:55:47.606738 | orchestrator | 2026-02-23 20:55:47.606743 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-23 20:55:47.606747 | orchestrator | Monday 23 February 2026 20:51:48 +0000 (0:00:00.592) 0:04:01.558 ******* 2026-02-23 20:55:47.606752 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-23 20:55:47.606756 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-23 20:55:47.606761 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-23 20:55:47.606765 | orchestrator | 2026-02-23 20:55:47.606770 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-23 20:55:47.606774 | orchestrator | Monday 23 February 2026 20:51:50 +0000 (0:00:01.251) 0:04:02.809 ******* 2026-02-23 20:55:47.606779 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-23 20:55:47.606784 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.606789 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-23 20:55:47.606793 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.606804 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-23 20:55:47.606809 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.606813 | orchestrator | 2026-02-23 20:55:47.606817 | orchestrator | TASK 
[nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-23 20:55:47.606822 | orchestrator | Monday 23 February 2026 20:51:50 +0000 (0:00:00.511) 0:04:03.321 ******* 2026-02-23 20:55:47.606826 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-23 20:55:47.606831 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-23 20:55:47.606835 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.606839 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-23 20:55:47.606844 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-23 20:55:47.606848 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-23 20:55:47.606853 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-23 20:55:47.606939 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.607051 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-23 20:55:47.607059 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-23 20:55:47.607062 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.607066 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-23 20:55:47.607087 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-23 20:55:47.607091 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-23 20:55:47.607094 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-23 20:55:47.607098 | orchestrator | 2026-02-23 20:55:47.607101 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-23 20:55:47.607104 
| orchestrator | Monday 23 February 2026 20:51:52 +0000 (0:00:01.948) 0:04:05.270 *******
2026-02-23 20:55:47.607107 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.607110 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.607113 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.607117 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:55:47.607120 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:55:47.607123 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:55:47.607126 | orchestrator |
2026-02-23 20:55:47.607129 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-02-23 20:55:47.607132 | orchestrator | Monday 23 February 2026 20:51:53 +0000 (0:00:01.013) 0:04:06.283 *******
2026-02-23 20:55:47.607135 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.607138 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.607141 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.607146 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:55:47.607151 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:55:47.607155 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:55:47.607160 | orchestrator |
2026-02-23 20:55:47.607164 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-23 20:55:47.607168 | orchestrator | Monday 23 February 2026 20:51:55 +0000 (0:00:01.926) 0:04:08.209 *******
2026-02-23 20:55:47.607177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 
20:55:47.607257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2026-02-23 20:55:47.607290 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607309 | orchestrator | 2026-02-23 20:55:47.607315 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-23 20:55:47.607321 | orchestrator | Monday 23 February 2026 20:51:57 +0000 (0:00:01.903) 0:04:10.113 ******* 2026-02-23 20:55:47.607327 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:55:47.607332 | orchestrator | 2026-02-23 20:55:47.607337 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-23 20:55:47.607342 | orchestrator | Monday 23 February 2026 20:51:58 +0000 (0:00:01.080) 0:04:11.193 ******* 2026-02-23 20:55:47.607363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607369 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.607506 | orchestrator | 2026-02-23 20:55:47.607511 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 
2026-02-23 20:55:47.607516 | orchestrator | Monday 23 February 2026 20:52:01 +0000 (0:00:03.250) 0:04:14.444 ******* 2026-02-23 20:55:47.607521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.607527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.607532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.607642 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.607665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.607671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607677 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.607680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.607684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.607701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607709 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.607712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-23 
20:55:47.607715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607718 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.607722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-23 20:55:47.607725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607728 | orchestrator | skipping: [testbed-node-2] 
2026-02-23 20:55:47.607732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-23 20:55:47.607735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607738 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.607741 | orchestrator | 2026-02-23 20:55:47.607744 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-23 20:55:47.607765 | orchestrator | Monday 23 February 2026 20:52:02 +0000 (0:00:01.168) 0:04:15.612 ******* 2026-02-23 20:55:47.607774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.607780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.607785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607790 | orchestrator | skipping: 
[testbed-node-5] 2026-02-23 20:55:47.607795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.607800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.607823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607834 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.607839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.607844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.607850 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607854 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.607857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-23 20:55:47.607860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607867 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.607884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-23 20:55:47.607888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.607891 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.607894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-23 20:55:47.607897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-23 20:55:47.607901 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.607904 | orchestrator |
2026-02-23 20:55:47.607907 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-23 20:55:47.607910 | orchestrator | Monday 23 February 2026 20:52:04 +0000 (0:00:01.865) 0:04:17.477 *******
2026-02-23 20:55:47.607913 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.607917 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.607920 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.607923 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-23 20:55:47.607926 | orchestrator |
2026-02-23 20:55:47.607929 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-02-23 20:55:47.607933 | orchestrator | Monday 23 February 2026 20:52:05 +0000 (0:00:00.971) 0:04:18.449 *******
2026-02-23 20:55:47.607936 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-23 20:55:47.607940 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-23 20:55:47.607943 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-23 20:55:47.607948 | orchestrator |
2026-02-23 20:55:47.607952 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-02-23 20:55:47.607955 | orchestrator | Monday 23 February 2026 20:52:06 +0000 (0:00:00.928) 0:04:19.377 *******
2026-02-23 20:55:47.607958 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-23 20:55:47.607961 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-23 20:55:47.607964 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-23 20:55:47.607967 | orchestrator |
2026-02-23 20:55:47.607971 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-02-23 20:55:47.607974 | orchestrator | Monday 23 February 2026 20:52:07 +0000 (0:00:00.968) 0:04:20.346 *******
2026-02-23 20:55:47.607977 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:55:47.607980 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:55:47.607983 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:55:47.607986 | orchestrator |
2026-02-23 20:55:47.607989 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-02-23 20:55:47.607993 | orchestrator | Monday 23 February 2026 20:52:08 +0000 (0:00:00.739) 0:04:21.086 *******
2026-02-23 20:55:47.607996 | orchestrator | ok: [testbed-node-3]
2026-02-23 20:55:47.607999 | orchestrator | ok: [testbed-node-4]
2026-02-23 20:55:47.608002 | orchestrator | ok: [testbed-node-5]
2026-02-23 20:55:47.608005 | orchestrator |
2026-02-23 20:55:47.608008 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-02-23 20:55:47.608011 | orchestrator | Monday 23 February 2026 20:52:09 +0000 (0:00:00.752) 0:04:21.839 *******
2026-02-23 20:55:47.608015 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-23 20:55:47.608018 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-23 20:55:47.608031 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-23 20:55:47.608035 | orchestrator |
2026-02-23 20:55:47.608038 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-02-23 20:55:47.608041 | orchestrator | Monday 23 February 2026 20:52:10 +0000 (0:00:01.187) 0:04:23.026 *******
2026-02-23 20:55:47.608044 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-23 20:55:47.608047 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-23 20:55:47.608050 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-23 20:55:47.608054 | orchestrator |
2026-02-23 20:55:47.608057 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-02-23 20:55:47.608060 | orchestrator | Monday 23 February 2026 20:52:11 +0000 (0:00:01.100) 0:04:24.127 *******
2026-02-23 20:55:47.608063 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-23 20:55:47.608066 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-23 20:55:47.608070 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-23 20:55:47.608073 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-02-23 20:55:47.608076 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-02-23 20:55:47.608079 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-02-23 20:55:47.608082 | orchestrator |
2026-02-23 20:55:47.608085 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-02-23 20:55:47.608088 | orchestrator | Monday 23 February 2026 20:52:15 +0000 (0:00:03.576) 0:04:27.704 *******
2026-02-23 20:55:47.608091 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:55:47.608094 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:55:47.608097 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:55:47.608100 | orchestrator |
2026-02-23 20:55:47.608103 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-02-23 20:55:47.608106 | orchestrator | Monday 23 February 2026 20:52:15 +0000 (0:00:00.572) 0:04:28.276 *******
2026-02-23 20:55:47.608110 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:55:47.608113 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:55:47.608116 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:55:47.608123 | orchestrator |
2026-02-23 20:55:47.608126 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-02-23 20:55:47.608129 | orchestrator | Monday 23 February 2026 20:52:15 +0000 (0:00:00.308) 0:04:28.585 *******
2026-02-23 20:55:47.608133 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:55:47.608136 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:55:47.608139 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:55:47.608142 | orchestrator |
2026-02-23 20:55:47.608147 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-02-23 20:55:47.608155 | orchestrator | Monday 23 February 2026 20:52:17 +0000 (0:00:01.148) 0:04:29.734 *******
2026-02-23 20:55:47.608161 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-23 20:55:47.608166 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-23 20:55:47.608171 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-23 20:55:47.608176 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-23 20:55:47.608181 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-23 20:55:47.608186 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-23 20:55:47.608191 | orchestrator |
2026-02-23 20:55:47.608196 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-02-23 20:55:47.608200 | orchestrator | Monday 23 February 2026 20:52:19 +0000 (0:00:02.894) 0:04:32.629 *******
2026-02-23 20:55:47.608205 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-23 20:55:47.608210 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-23 20:55:47.608215 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-23 20:55:47.608220 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-23 20:55:47.608225 | orchestrator | changed: [testbed-node-3]
2026-02-23 20:55:47.608230 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-23 20:55:47.608236 | orchestrator | changed: [testbed-node-4]
2026-02-23 20:55:47.608241 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-23 20:55:47.608246 | orchestrator | changed: [testbed-node-5]
2026-02-23 20:55:47.608252 | orchestrator |
2026-02-23 20:55:47.608257 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] *************************
2026-02-23 20:55:47.608262 | orchestrator | Monday 23 February 2026 20:52:22 +0000 (0:00:02.890) 0:04:35.519 *******
2026-02-23 20:55:47.608267 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.608270 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.608273 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.608276 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-02-23 20:55:47.608279 | orchestrator |
2026-02-23 20:55:47.608283 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-02-23 20:55:47.608287 | orchestrator | Monday 23 February 2026 20:52:24 +0000 (0:00:01.707) 0:04:37.227 *******
2026-02-23 20:55:47.608290 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-23 20:55:47.608294 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-23 20:55:47.608297 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-23 20:55:47.608300 | orchestrator |
2026-02-23 20:55:47.608321 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-02-23 20:55:47.608325 | orchestrator | Monday 23 February 2026 20:52:25 +0000 (0:00:01.306) 0:04:38.533 *******
2026-02-23 20:55:47.608328 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:55:47.608332 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:55:47.608339 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:55:47.608342 | orchestrator |
2026-02-23 20:55:47.608345 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-02-23 20:55:47.608349 | orchestrator | Monday 23 February 2026 20:52:26 +0000 (0:00:00.323) 0:04:38.857 *******
2026-02-23 20:55:47.608353 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:55:47.608356 | orchestrator |
2026-02-23 20:55:47.608359 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-02-23 20:55:47.608363 | orchestrator | Monday 23 February 2026 20:52:26 +0000 (0:00:00.141) 0:04:38.998 *******
2026-02-23 20:55:47.608366 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:55:47.608370 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:55:47.608373 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:55:47.608377 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.608380 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.608383 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.608387 | orchestrator |
2026-02-23 20:55:47.608390 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-02-23 20:55:47.608394 | orchestrator | Monday 23 February 2026 20:52:27 +0000 (0:00:00.770) 0:04:39.768 *******
2026-02-23 20:55:47.608397 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-23 20:55:47.608400 | orchestrator |
2026-02-23 20:55:47.608404 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-02-23 20:55:47.608407 | orchestrator | Monday 23 February 2026 20:52:27 +0000 (0:00:00.746) 0:04:40.515 *******
2026-02-23 20:55:47.608410 | orchestrator | skipping: [testbed-node-3]
2026-02-23 20:55:47.608414 | orchestrator | skipping: [testbed-node-4]
2026-02-23 20:55:47.608417 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:55:47.608420 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.608424 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.608427 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.608430 | orchestrator |
2026-02-23 20:55:47.608434 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-02-23 20:55:47.608437 | orchestrator | Monday 23 February 2026 20:52:28 +0000 (0:00:00.580) 0:04:41.096 *******
2026-02-23 20:55:47.608441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup',
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608494 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608505 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608518 | orchestrator | 2026-02-23 20:55:47.608521 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-23 20:55:47.608525 | orchestrator | Monday 23 February 2026 20:52:32 +0000 (0:00:03.936) 0:04:45.032 ******* 2026-02-23 20:55:47.608533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.608537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 
20:55:47.608541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.608544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.608548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.608554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.608561 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 
20:55:47.608594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.608601 | orchestrator | 2026-02-23 20:55:47.608605 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-23 20:55:47.608608 | orchestrator | Monday 23 February 2026 20:52:38 +0000 (0:00:05.853) 0:04:50.885 ******* 2026-02-23 20:55:47.608612 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.608615 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.608618 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.608622 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.608625 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.608628 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.608631 | orchestrator | 2026-02-23 20:55:47.608634 | 
orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-23 20:55:47.608640 | orchestrator | Monday 23 February 2026 20:52:39 +0000 (0:00:01.791) 0:04:52.677 ******* 2026-02-23 20:55:47.608643 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-23 20:55:47.608743 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-23 20:55:47.608748 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-23 20:55:47.608751 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-23 20:55:47.608754 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-23 20:55:47.608758 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.608761 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-23 20:55:47.608764 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.608767 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-23 20:55:47.608770 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.608773 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-23 20:55:47.608776 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-23 20:55:47.608779 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-23 20:55:47.608782 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-23 20:55:47.608785 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-23 
20:55:47.608788 | orchestrator | 2026-02-23 20:55:47.608792 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-23 20:55:47.608795 | orchestrator | Monday 23 February 2026 20:52:43 +0000 (0:00:03.434) 0:04:56.111 ******* 2026-02-23 20:55:47.608798 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.608801 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.608804 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.608807 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.608810 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.608813 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.608816 | orchestrator | 2026-02-23 20:55:47.608819 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-23 20:55:47.608822 | orchestrator | Monday 23 February 2026 20:52:44 +0000 (0:00:00.763) 0:04:56.875 ******* 2026-02-23 20:55:47.608833 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-23 20:55:47.608840 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-23 20:55:47.608844 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-23 20:55:47.608847 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-23 20:55:47.608850 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-23 20:55:47.608853 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-23 20:55:47.608856 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-23 20:55:47.608859 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-23 20:55:47.608862 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-23 20:55:47.608870 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-23 20:55:47.608873 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.608876 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-23 20:55:47.608879 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-23 20:55:47.608882 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.608885 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-23 20:55:47.608888 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.608891 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-23 20:55:47.608894 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-23 20:55:47.608897 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-23 20:55:47.608900 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-23 20:55:47.608903 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-23 20:55:47.608906 | 
orchestrator | 2026-02-23 20:55:47.608909 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-23 20:55:47.608913 | orchestrator | Monday 23 February 2026 20:52:49 +0000 (0:00:04.985) 0:05:01.860 ******* 2026-02-23 20:55:47.608916 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-23 20:55:47.608919 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-23 20:55:47.608922 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-23 20:55:47.608925 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-23 20:55:47.608928 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-23 20:55:47.608931 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-23 20:55:47.608936 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-23 20:55:47.608939 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-23 20:55:47.608942 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-23 20:55:47.608945 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-23 20:55:47.608948 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-23 20:55:47.608951 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-23 20:55:47.608954 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.608957 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-23 20:55:47.608960 | 
orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-23 20:55:47.608963 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-23 20:55:47.608966 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-23 20:55:47.608969 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.608973 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-23 20:55:47.608976 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.608982 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-23 20:55:47.608988 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-23 20:55:47.608991 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-23 20:55:47.608994 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-23 20:55:47.608997 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-23 20:55:47.609000 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-23 20:55:47.609003 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-23 20:55:47.609006 | orchestrator | 2026-02-23 20:55:47.609010 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-23 20:55:47.609013 | orchestrator | Monday 23 February 2026 20:52:56 +0000 (0:00:07.057) 0:05:08.917 ******* 2026-02-23 20:55:47.609016 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.609019 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.609022 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.609025 | 
orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.609028 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.609031 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.609034 | orchestrator | 2026-02-23 20:55:47.609038 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-23 20:55:47.609041 | orchestrator | Monday 23 February 2026 20:52:56 +0000 (0:00:00.636) 0:05:09.554 ******* 2026-02-23 20:55:47.609044 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.609047 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.609050 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.609053 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.609056 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.609059 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.609062 | orchestrator | 2026-02-23 20:55:47.609066 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-23 20:55:47.609069 | orchestrator | Monday 23 February 2026 20:52:57 +0000 (0:00:00.519) 0:05:10.074 ******* 2026-02-23 20:55:47.609072 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.609075 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.609078 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.609081 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:55:47.609084 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:55:47.609087 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:55:47.609090 | orchestrator | 2026-02-23 20:55:47.609094 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-23 20:55:47.609097 | orchestrator | Monday 23 February 2026 20:52:59 +0000 (0:00:01.726) 0:05:11.800 ******* 2026-02-23 20:55:47.609101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 
'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.609104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.609114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.609118 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.609121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.609124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.609128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.609131 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.609134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-23 20:55:47.609140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-23 20:55:47.609147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.609150 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.609154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-23 20:55:47.609157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.609160 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.609164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-23 20:55:47.609167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.609172 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.609175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-23 20:55:47.609182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-23 20:55:47.609186 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.609189 | orchestrator | 2026-02-23 20:55:47.609192 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-23 20:55:47.609195 | orchestrator | Monday 23 February 2026 20:53:00 +0000 (0:00:01.486) 0:05:13.287 ******* 2026-02-23 20:55:47.609198 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-23 20:55:47.609201 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-23 20:55:47.609204 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.609208 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-23 20:55:47.609211 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-23 20:55:47.609214 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.609217 | orchestrator | 
skipping: [testbed-node-5] => (item=nova-compute)  2026-02-23 20:55:47.609220 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-23 20:55:47.609223 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.609226 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-23 20:55:47.609230 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-23 20:55:47.609233 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.609236 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-23 20:55:47.609239 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-23 20:55:47.609242 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.609245 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-23 20:55:47.609248 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-23 20:55:47.609251 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.609255 | orchestrator | 2026-02-23 20:55:47.609258 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-23 20:55:47.609261 | orchestrator | Monday 23 February 2026 20:53:01 +0000 (0:00:00.718) 0:05:14.006 ******* 2026-02-23 20:55:47.609264 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', 
''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609273 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609288 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-23 20:55:47.609329 | orchestrator | 2026-02-23 20:55:47.609332 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-23 20:55:47.609335 | orchestrator | Monday 23 February 2026 20:53:03 +0000 (0:00:02.444) 0:05:16.450 ******* 2026-02-23 20:55:47.609338 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.609341 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.609344 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.609347 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.609351 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.609354 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.609357 | orchestrator | 2026-02-23 20:55:47.609360 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-23 20:55:47.609363 | orchestrator | Monday 23 February 2026 20:53:04 +0000 (0:00:00.667) 0:05:17.118 ******* 2026-02-23 20:55:47.609366 | orchestrator | 2026-02-23 20:55:47.609369 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-23 20:55:47.609376 | orchestrator | Monday 23 February 2026 20:53:04 +0000 (0:00:00.120) 0:05:17.239 ******* 2026-02-23 20:55:47.609379 | orchestrator | 2026-02-23 20:55:47.609382 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-23 20:55:47.609385 | orchestrator | Monday 23 February 2026 20:53:04 +0000 (0:00:00.119) 0:05:17.358 ******* 2026-02-23 20:55:47.609388 | orchestrator | 2026-02-23 20:55:47.609391 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-23 20:55:47.609395 | orchestrator | Monday 23 February 2026 20:53:04 +0000 (0:00:00.118) 0:05:17.477 ******* 2026-02-23 20:55:47.609398 | orchestrator | 2026-02-23 
20:55:47.609401 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-23 20:55:47.609404 | orchestrator | Monday 23 February 2026 20:53:04 +0000 (0:00:00.216) 0:05:17.693 ******* 2026-02-23 20:55:47.609407 | orchestrator | 2026-02-23 20:55:47.609410 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-23 20:55:47.609416 | orchestrator | Monday 23 February 2026 20:53:05 +0000 (0:00:00.133) 0:05:17.827 ******* 2026-02-23 20:55:47.609423 | orchestrator | 2026-02-23 20:55:47.609426 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-23 20:55:47.609429 | orchestrator | Monday 23 February 2026 20:53:05 +0000 (0:00:00.123) 0:05:17.950 ******* 2026-02-23 20:55:47.609432 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:55:47.609435 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:55:47.609438 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:55:47.609441 | orchestrator | 2026-02-23 20:55:47.609445 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-23 20:55:47.609448 | orchestrator | Monday 23 February 2026 20:53:11 +0000 (0:00:06.161) 0:05:24.112 ******* 2026-02-23 20:55:47.609451 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:55:47.609454 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:55:47.609457 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:55:47.609460 | orchestrator | 2026-02-23 20:55:47.609463 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-23 20:55:47.609466 | orchestrator | Monday 23 February 2026 20:53:27 +0000 (0:00:16.164) 0:05:40.276 ******* 2026-02-23 20:55:47.609469 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:55:47.609473 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:55:47.609476 | orchestrator | changed: 
[testbed-node-4] 2026-02-23 20:55:47.609479 | orchestrator | 2026-02-23 20:55:47.609482 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-23 20:55:47.609485 | orchestrator | Monday 23 February 2026 20:53:42 +0000 (0:00:15.109) 0:05:55.386 ******* 2026-02-23 20:55:47.609488 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:55:47.609491 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:55:47.609494 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:55:47.609497 | orchestrator | 2026-02-23 20:55:47.609501 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-23 20:55:47.609504 | orchestrator | Monday 23 February 2026 20:54:07 +0000 (0:00:24.852) 0:06:20.238 ******* 2026-02-23 20:55:47.609507 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-02-23 20:55:47.609510 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-02-23 20:55:47.609513 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2026-02-23 20:55:47.609516 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:55:47.609519 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:55:47.609522 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:55:47.609526 | orchestrator | 2026-02-23 20:55:47.609529 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-23 20:55:47.609532 | orchestrator | Monday 23 February 2026 20:54:13 +0000 (0:00:06.033) 0:06:26.271 ******* 2026-02-23 20:55:47.609535 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:55:47.609538 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:55:47.609541 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:55:47.609544 | orchestrator | 2026-02-23 20:55:47.609547 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-23 20:55:47.609550 | orchestrator | Monday 23 February 2026 20:54:14 +0000 (0:00:00.653) 0:06:26.924 ******* 2026-02-23 20:55:47.609554 | orchestrator | changed: [testbed-node-4] 2026-02-23 20:55:47.609557 | orchestrator | changed: [testbed-node-3] 2026-02-23 20:55:47.609560 | orchestrator | changed: [testbed-node-5] 2026-02-23 20:55:47.609563 | orchestrator | 2026-02-23 20:55:47.609566 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-23 20:55:47.609569 | orchestrator | Monday 23 February 2026 20:54:37 +0000 (0:00:22.983) 0:06:49.908 ******* 2026-02-23 20:55:47.609572 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.609575 | orchestrator | 2026-02-23 20:55:47.609580 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-23 20:55:47.609583 | orchestrator | Monday 23 February 2026 20:54:37 +0000 (0:00:00.121) 0:06:50.030 ******* 2026-02-23 20:55:47.609586 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.609590 | orchestrator | skipping: [testbed-node-3] 
2026-02-23 20:55:47.609593 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.609596 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.609599 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.609602 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-02-23 20:55:47.609605 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-23 20:55:47.609608 | orchestrator | 2026-02-23 20:55:47.609611 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-23 20:55:47.609615 | orchestrator | Monday 23 February 2026 20:54:58 +0000 (0:00:21.504) 0:07:11.534 ******* 2026-02-23 20:55:47.609618 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.609621 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.609624 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.609627 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.609630 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.609637 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.609640 | orchestrator | 2026-02-23 20:55:47.609643 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-23 20:55:47.609671 | orchestrator | Monday 23 February 2026 20:55:07 +0000 (0:00:08.666) 0:07:20.200 ******* 2026-02-23 20:55:47.609677 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.609680 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.609683 | orchestrator | skipping: [testbed-node-5] 2026-02-23 20:55:47.609686 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.609689 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.609692 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-02-23 20:55:47.609696 | 
orchestrator | 2026-02-23 20:55:47.609699 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-23 20:55:47.609702 | orchestrator | Monday 23 February 2026 20:55:11 +0000 (0:00:04.172) 0:07:24.372 ******* 2026-02-23 20:55:47.609705 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-23 20:55:47.609708 | orchestrator | 2026-02-23 20:55:47.609711 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-23 20:55:47.609714 | orchestrator | Monday 23 February 2026 20:55:25 +0000 (0:00:13.490) 0:07:37.863 ******* 2026-02-23 20:55:47.609717 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-23 20:55:47.609721 | orchestrator | 2026-02-23 20:55:47.609724 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-23 20:55:47.609727 | orchestrator | Monday 23 February 2026 20:55:26 +0000 (0:00:01.410) 0:07:39.274 ******* 2026-02-23 20:55:47.609730 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.609733 | orchestrator | 2026-02-23 20:55:47.609737 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-23 20:55:47.609741 | orchestrator | Monday 23 February 2026 20:55:27 +0000 (0:00:01.301) 0:07:40.575 ******* 2026-02-23 20:55:47.609747 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-23 20:55:47.609750 | orchestrator | 2026-02-23 20:55:47.609753 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-23 20:55:47.609756 | orchestrator | Monday 23 February 2026 20:55:39 +0000 (0:00:11.292) 0:07:51.867 ******* 2026-02-23 20:55:47.609759 | orchestrator | ok: [testbed-node-3] 2026-02-23 20:55:47.609762 | orchestrator | ok: [testbed-node-4] 2026-02-23 20:55:47.609766 | orchestrator | ok: [testbed-node-5] 2026-02-23 20:55:47.609769 | 
orchestrator | ok: [testbed-node-0] 2026-02-23 20:55:47.609772 | orchestrator | ok: [testbed-node-1] 2026-02-23 20:55:47.609775 | orchestrator | ok: [testbed-node-2] 2026-02-23 20:55:47.609781 | orchestrator | 2026-02-23 20:55:47.609784 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-23 20:55:47.609787 | orchestrator | 2026-02-23 20:55:47.609790 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-23 20:55:47.609793 | orchestrator | Monday 23 February 2026 20:55:40 +0000 (0:00:01.627) 0:07:53.495 ******* 2026-02-23 20:55:47.609796 | orchestrator | changed: [testbed-node-0] 2026-02-23 20:55:47.609799 | orchestrator | changed: [testbed-node-1] 2026-02-23 20:55:47.609802 | orchestrator | changed: [testbed-node-2] 2026-02-23 20:55:47.609805 | orchestrator | 2026-02-23 20:55:47.609809 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-23 20:55:47.609812 | orchestrator | 2026-02-23 20:55:47.609815 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-23 20:55:47.609818 | orchestrator | Monday 23 February 2026 20:55:41 +0000 (0:00:01.128) 0:07:54.624 ******* 2026-02-23 20:55:47.609821 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:55:47.609824 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:55:47.609827 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:55:47.609830 | orchestrator | 2026-02-23 20:55:47.609833 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-02-23 20:55:47.609837 | orchestrator | 2026-02-23 20:55:47.609840 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-02-23 20:55:47.609843 | orchestrator | Monday 23 February 2026 20:55:42 +0000 (0:00:00.505) 0:07:55.130 ******* 2026-02-23 20:55:47.609846 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-conductor)  2026-02-23 20:55:47.609849 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-23 20:55:47.609852 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-23 20:55:47.609855 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-02-23 20:55:47.609859 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-02-23 20:55:47.609862 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-02-23 20:55:47.609865 | orchestrator | skipping: [testbed-node-3] 2026-02-23 20:55:47.609868 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-02-23 20:55:47.609871 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-23 20:55:47.609874 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-23 20:55:47.609877 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-02-23 20:55:47.609880 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-02-23 20:55:47.609883 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-02-23 20:55:47.609886 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-02-23 20:55:47.609889 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-23 20:55:47.609892 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-23 20:55:47.609896 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-02-23 20:55:47.609899 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-02-23 20:55:47.609902 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-02-23 20:55:47.609905 | orchestrator | skipping: [testbed-node-4] 2026-02-23 20:55:47.609908 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-02-23 
20:55:47.609911 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-23 20:55:47.609918 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-23 20:55:47.609922 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-23 20:55:47.609925 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-23 20:55:47.609928 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-23 20:55:47.609931 | orchestrator | skipping: [testbed-node-5]
2026-02-23 20:55:47.609936 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-23 20:55:47.609939 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-23 20:55:47.609942 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-23 20:55:47.609945 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-23 20:55:47.609948 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-23 20:55:47.609951 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-23 20:55:47.609954 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.609958 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.609961 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-23 20:55:47.609964 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-23 20:55:47.609967 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-23 20:55:47.609970 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-23 20:55:47.609973 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-23 20:55:47.609976 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-23 20:55:47.609979 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.609982 | orchestrator |
2026-02-23 20:55:47.609985 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-23 20:55:47.609988 | orchestrator |
2026-02-23 20:55:47.609991 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-23 20:55:47.609995 | orchestrator | Monday 23 February 2026 20:55:43 +0000 (0:00:01.283) 0:07:56.413 *******
2026-02-23 20:55:47.609998 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-23 20:55:47.610001 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-23 20:55:47.610004 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.610007 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-23 20:55:47.610010 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-23 20:55:47.610033 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.610036 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-23 20:55:47.610039 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-23 20:55:47.610042 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.610045 | orchestrator |
2026-02-23 20:55:47.610048 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-23 20:55:47.610052 | orchestrator |
2026-02-23 20:55:47.610055 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-23 20:55:47.610058 | orchestrator | Monday 23 February 2026 20:55:44 +0000 (0:00:00.859) 0:07:57.272 *******
2026-02-23 20:55:47.610061 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.610064 | orchestrator |
2026-02-23 20:55:47.610067 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-23 20:55:47.610070 | orchestrator |
2026-02-23 20:55:47.610074 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-23 20:55:47.610077 | orchestrator | Monday 23 February 2026 20:55:45 +0000 (0:00:00.752) 0:07:58.024 *******
2026-02-23 20:55:47.610080 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:55:47.610083 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:55:47.610086 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:55:47.610089 | orchestrator |
2026-02-23 20:55:47.610092 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:55:47.610095 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:55:47.610099 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2026-02-23 20:55:47.610102 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0
2026-02-23 20:55:47.610108 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0
2026-02-23 20:55:47.610111 | orchestrator | testbed-node-3 : ok=40  changed=27  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-23 20:55:47.610114 | orchestrator | testbed-node-4 : ok=44  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-23 20:55:47.610117 | orchestrator | testbed-node-5 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-23 20:55:47.610120 | orchestrator |
2026-02-23 20:55:47.610123 | orchestrator |
2026-02-23 20:55:47.610127 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:55:47.610130 | orchestrator | Monday 23 February 2026 20:55:45 +0000 (0:00:00.632) 0:07:58.657 *******
2026-02-23 20:55:47.610133 | orchestrator | ===============================================================================
2026-02-23 20:55:47.610136 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.67s
2026-02-23 20:55:47.610143 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.85s
2026-02-23 20:55:47.610147 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.98s
2026-02-23 20:55:47.610150 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.50s
2026-02-23 20:55:47.610153 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.16s
2026-02-23 20:55:47.610156 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.40s
2026-02-23 20:55:47.610159 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.16s
2026-02-23 20:55:47.610162 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 15.11s
2026-02-23 20:55:47.610166 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.54s
2026-02-23 20:55:47.610169 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.89s
2026-02-23 20:55:47.610172 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.49s
2026-02-23 20:55:47.610175 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 13.27s
2026-02-23 20:55:47.610178 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.89s
2026-02-23 20:55:47.610181 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.54s
2026-02-23 20:55:47.610184 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.29s
2026-02-23 20:55:47.610188 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.83s
2026-02-23 20:55:47.610191 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.67s
2026-02-23 20:55:47.610194 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.58s
2026-02-23 20:55:47.610197 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.06s
2026-02-23 20:55:47.610200 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 6.82s
2026-02-23 20:55:47.610203 | orchestrator | 2026-02-23 20:55:47 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:55:50.637101 | orchestrator | 2026-02-23 20:55:50 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED
2026-02-23 20:55:50.637147 | orchestrator | 2026-02-23 20:55:50 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:55:53.687251 | orchestrator | 2026-02-23 20:55:53 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED
2026-02-23 20:55:53.687300 | orchestrator | 2026-02-23 20:55:53 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:55:56.737434 | orchestrator | 2026-02-23 20:55:56 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED
2026-02-23 20:55:56.737483 | orchestrator | 2026-02-23 20:55:56 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:55:59.790978 | orchestrator | 2026-02-23 20:55:59 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED
2026-02-23 20:55:59.791040 | orchestrator | 2026-02-23 20:55:59 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:56:02.836388 | orchestrator | 2026-02-23 20:56:02 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED
2026-02-23 20:56:02.836432 | orchestrator | 2026-02-23 20:56:02 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:56:05.896277 | orchestrator | 2026-02-23 20:56:05 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED
2026-02-23 20:56:05.896332 | orchestrator | 2026-02-23 20:56:05 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:56:08.938240 | orchestrator | 2026-02-23 20:56:08 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state STARTED
2026-02-23 20:56:08.938298 | orchestrator | 2026-02-23 20:56:08 | INFO  | Wait 1 second(s) until the next check
2026-02-23 20:56:11.992500 | orchestrator | 2026-02-23 20:56:11 | INFO  | Task e5c92fd7-3dd1-41f9-85de-9196abaaf9e7 is in state SUCCESS
2026-02-23 20:56:11.993184 | orchestrator |
2026-02-23 20:56:11.993221 | orchestrator |
2026-02-23 20:56:11.993229 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-23 20:56:11.993236 | orchestrator |
2026-02-23 20:56:11.993243 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-23 20:56:11.993249 | orchestrator | Monday 23 February 2026 20:51:46 +0000 (0:00:00.233) 0:00:00.233 *******
2026-02-23 20:56:11.993254 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:56:11.993259 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:56:11.993262 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:56:11.993266 | orchestrator |
2026-02-23 20:56:11.993269 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-23 20:56:11.993273 | orchestrator | Monday 23 February 2026 20:51:47 +0000 (0:00:00.244) 0:00:00.477 *******
2026-02-23 20:56:11.993277 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-02-23 20:56:11.993281 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-02-23 20:56:11.993284 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-02-23 20:56:11.993288 | orchestrator |
2026-02-23 20:56:11.993292 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-02-23 20:56:11.993295 | orchestrator |
2026-02-23 20:56:11.993299 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-23 20:56:11.993315 | orchestrator | Monday 23 February 2026 20:51:47 +0000 (0:00:00.509) 0:00:00.840 *******
2026-02-23 20:56:11.993319 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:56:11.993323 | orchestrator |
2026-02-23 20:56:11.993327 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-02-23 20:56:11.993331 | orchestrator | Monday 23 February 2026 20:51:48 +0000 (0:00:00.509) 0:00:01.349 *******
2026-02-23 20:56:11.993335 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-02-23 20:56:11.993338 | orchestrator |
2026-02-23 20:56:11.993342 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-02-23 20:56:11.993346 | orchestrator | Monday 23 February 2026 20:51:51 +0000 (0:00:03.534) 0:00:04.884 *******
2026-02-23 20:56:11.993349 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-02-23 20:56:11.993353 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-02-23 20:56:11.993357 | orchestrator |
2026-02-23 20:56:11.993370 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-02-23 20:56:11.993374 | orchestrator | Monday 23 February 2026 20:51:57 +0000 (0:00:06.161) 0:00:11.045 *******
2026-02-23 20:56:11.993378 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-23 20:56:11.993419 | orchestrator |
2026-02-23 20:56:11.993423 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-02-23 20:56:11.993427 | orchestrator | Monday 23 February 2026 20:52:01 +0000 (0:00:03.261) 0:00:14.307 *******
2026-02-23 20:56:11.993430 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-23 20:56:11.993434 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-23 20:56:11.993438 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-23 20:56:11.993442 | orchestrator |
2026-02-23 20:56:11.993445 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-02-23 20:56:11.993455 | orchestrator | Monday 23 February 2026 20:52:08 +0000 (0:00:07.404) 0:00:21.711 *******
2026-02-23 20:56:11.993459 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-23 20:56:11.993463 | orchestrator |
2026-02-23 20:56:11.993466 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-02-23 20:56:11.993470 | orchestrator | Monday 23 February 2026 20:52:11 +0000 (0:00:03.232) 0:00:24.943 *******
2026-02-23 20:56:11.993474 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-23 20:56:11.993650 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-23 20:56:11.993657 | orchestrator |
2026-02-23 20:56:11.993663 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-02-23 20:56:11.993669 | orchestrator | Monday 23 February 2026 20:52:17 +0000 (0:00:06.034) 0:00:30.977 *******
2026-02-23 20:56:11.993675 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-02-23 20:56:11.993681 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-02-23 20:56:11.993685 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-02-23 20:56:11.993689 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-02-23 20:56:11.993693 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-02-23 20:56:11.993701 | orchestrator |
2026-02-23 20:56:11.993705 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-23 20:56:11.993709 | orchestrator | Monday 23 February 2026 20:52:31 +0000 (0:00:13.412) 0:00:44.390 *******
2026-02-23 20:56:11.993741 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:56:11.993746 | orchestrator |
2026-02-23 20:56:11.993755 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-02-23 20:56:11.993764 | orchestrator | Monday 23 February 2026 20:52:31 +0000 (0:00:00.639) 0:00:45.030 *******
2026-02-23 20:56:11.993767 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.993771 | orchestrator |
2026-02-23 20:56:11.993775 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-02-23 20:56:11.993779 | orchestrator | Monday 23 February 2026 20:52:36 +0000 (0:00:04.802) 0:00:49.832 *******
2026-02-23 20:56:11.993782 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.993786 | orchestrator |
2026-02-23 20:56:11.993790 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-23 20:56:11.993830 | orchestrator | Monday 23 February 2026 20:52:40 +0000 (0:00:04.265) 0:00:54.098 *******
2026-02-23 20:56:11.993838 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:56:11.993841 | orchestrator |
2026-02-23 20:56:11.993846 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-02-23 20:56:11.993850 | orchestrator | Monday 23 February 2026 20:52:44 +0000 (0:00:03.364) 0:00:57.462 *******
2026-02-23 20:56:11.993854 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-23 20:56:11.993858 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-23 20:56:11.993868 | orchestrator |
2026-02-23 20:56:11.993872 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-02-23 20:56:11.993876 | orchestrator | Monday 23 February 2026 20:52:54 +0000 (0:00:09.875) 0:01:07.338 *******
2026-02-23 20:56:11.993879 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-02-23 20:56:11.993883 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-02-23 20:56:11.993892 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-02-23 20:56:11.993897 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-02-23 20:56:11.993901 | orchestrator |
2026-02-23 20:56:11.993905 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-02-23 20:56:11.993909 | orchestrator | Monday 23 February 2026 20:53:08 +0000 (0:00:14.661) 0:01:21.999 *******
2026-02-23 20:56:11.993913 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.993917 | orchestrator |
2026-02-23 20:56:11.993920 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-02-23 20:56:11.993924 | orchestrator | Monday 23 February 2026 20:53:12 +0000 (0:00:04.091) 0:01:26.091 *******
2026-02-23 20:56:11.993928 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.993932 | orchestrator |
2026-02-23 20:56:11.993936 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-02-23 20:56:11.993939 | orchestrator | Monday 23 February 2026 20:53:18 +0000 (0:00:05.217) 0:01:31.308 *******
2026-02-23 20:56:11.993943 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:56:11.993947 | orchestrator |
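The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from the deploy wrapper polling an OSISM task until it reaches a terminal state. A minimal sketch of that polling pattern, assuming a hypothetical `get_state` callable standing in for the real task-status lookup (not the actual osism client API):

```python
import time

def wait_for_task(get_state, interval=1.0, timeout=600.0):
    """Poll get_state() until the task leaves STARTED, as in the log above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()  # e.g. "STARTED", "SUCCESS", "FAILURE"
        if state in ("SUCCESS", "FAILURE"):
            return state
        print(f"Task is in state {state}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("task did not reach a terminal state in time")

# Simulated state sequence standing in for a real status lookup.
states = iter(["STARTED", "STARTED", "SUCCESS"])
print(wait_for_task(lambda: next(states), interval=0.1))
```

The fixed one-second interval mirrors the log; a production poller would typically also handle transient lookup errors and back off.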
2026-02-23 20:56:11.993951 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-02-23 20:56:11.993954 | orchestrator | Monday 23 February 2026 20:53:18 +0000 (0:00:00.208) 0:01:31.516 *******
2026-02-23 20:56:11.993958 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:56:11.993962 | orchestrator |
2026-02-23 20:56:11.993966 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-23 20:56:11.993969 | orchestrator | Monday 23 February 2026 20:53:22 +0000 (0:00:04.078) 0:01:35.594 *******
2026-02-23 20:56:11.993973 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:56:11.993977 | orchestrator |
2026-02-23 20:56:11.993981 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-02-23 20:56:11.993985 | orchestrator | Monday 23 February 2026 20:53:23 +0000 (0:00:00.897) 0:01:36.492 *******
2026-02-23 20:56:11.993989 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.993993 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.993997 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.994047 | orchestrator |
2026-02-23 20:56:11.994052 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-02-23 20:56:11.994055 | orchestrator | Monday 23 February 2026 20:53:27 +0000 (0:00:04.654) 0:01:41.146 *******
2026-02-23 20:56:11.994059 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.994063 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.994068 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.994132 | orchestrator |
2026-02-23 20:56:11.994294 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-02-23 20:56:11.994307 | orchestrator | Monday 23 February 2026 20:53:31 +0000 (0:00:03.844) 0:01:44.991 *******
2026-02-23 20:56:11.994311 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.994315 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.994319 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.994323 | orchestrator |
2026-02-23 20:56:11.994327 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-02-23 20:56:11.994335 | orchestrator | Monday 23 February 2026 20:53:32 +0000 (0:00:00.678) 0:01:45.670 *******
2026-02-23 20:56:11.994339 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:56:11.994343 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:56:11.994347 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:56:11.994352 | orchestrator |
2026-02-23 20:56:11.994358 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-02-23 20:56:11.994364 | orchestrator | Monday 23 February 2026 20:53:34 +0000 (0:00:01.755) 0:01:47.425 *******
2026-02-23 20:56:11.994369 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.994378 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.994385 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.994390 | orchestrator |
2026-02-23 20:56:11.994397 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-02-23 20:56:11.994403 | orchestrator | Monday 23 February 2026 20:53:35 +0000 (0:00:01.187) 0:01:48.613 *******
2026-02-23 20:56:11.994408 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.994414 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.994420 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.994426 | orchestrator |
2026-02-23 20:56:11.994432 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-02-23 20:56:11.994438 | orchestrator | Monday 23 February 2026 20:53:36 +0000 (0:00:01.028) 0:01:49.642 *******
2026-02-23 20:56:11.994444 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.994450 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.994456 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.994462 | orchestrator |
2026-02-23 20:56:11.994491 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-02-23 20:56:11.994496 | orchestrator | Monday 23 February 2026 20:53:38 +0000 (0:00:01.792) 0:01:51.434 *******
2026-02-23 20:56:11.994499 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.994503 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.994507 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.994510 | orchestrator |
2026-02-23 20:56:11.994514 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-02-23 20:56:11.994518 | orchestrator | Monday 23 February 2026 20:53:39 +0000 (0:00:01.672) 0:01:53.107 *******
2026-02-23 20:56:11.994521 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:56:11.994525 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:56:11.994529 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:56:11.994532 | orchestrator |
2026-02-23 20:56:11.994536 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-02-23 20:56:11.994540 | orchestrator | Monday 23 February 2026 20:53:40 +0000 (0:00:00.614) 0:01:53.722 *******
2026-02-23 20:56:11.994544 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:56:11.994547 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:56:11.994551 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:56:11.994554 | orchestrator |
2026-02-23 20:56:11.994558 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-23 20:56:11.994566 | orchestrator | Monday 23 February 2026 20:53:42 +0000 (0:00:02.416) 0:01:56.139 *******
2026-02-23 20:56:11.994570 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-23 20:56:11.994573 | orchestrator |
2026-02-23 20:56:11.994577 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-02-23 20:56:11.994581 | orchestrator | Monday 23 February 2026 20:53:43 +0000 (0:00:00.761) 0:01:56.900 *******
2026-02-23 20:56:11.994585 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:56:11.994588 | orchestrator |
2026-02-23 20:56:11.994592 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-23 20:56:11.994625 | orchestrator | Monday 23 February 2026 20:53:47 +0000 (0:00:03.602) 0:02:00.503 *******
2026-02-23 20:56:11.994631 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:56:11.994635 | orchestrator |
2026-02-23 20:56:11.994639 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-02-23 20:56:11.994648 | orchestrator | Monday 23 February 2026 20:53:50 +0000 (0:00:03.118) 0:02:03.622 *******
2026-02-23 20:56:11.994652 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-23 20:56:11.994656 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-23 20:56:11.994659 | orchestrator |
2026-02-23 20:56:11.994663 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-02-23 20:56:11.994667 | orchestrator | Monday 23 February 2026 20:53:57 +0000 (0:00:06.755) 0:02:10.377 *******
2026-02-23 20:56:11.994671 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:56:11.994674 | orchestrator |
2026-02-23 20:56:11.994680 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-02-23 20:56:11.994684 | orchestrator | Monday 23 February 2026 20:54:01 +0000 (0:00:03.910) 0:02:14.288 *******
2026-02-23 20:56:11.994688 | orchestrator | ok: [testbed-node-0]
2026-02-23 20:56:11.994692 | orchestrator | ok: [testbed-node-1]
2026-02-23 20:56:11.994695 | orchestrator | ok: [testbed-node-2]
2026-02-23 20:56:11.994699 | orchestrator |
2026-02-23 20:56:11.994703 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-02-23 20:56:11.994706 | orchestrator | Monday 23 February 2026 20:54:01 +0000 (0:00:00.292) 0:02:14.580 *******
2026-02-23 20:56:11.994712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-23 20:56:11.994733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-23 20:56:11.994740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-23 20:56:11.994747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-23 20:56:11.994751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-23 20:56:11.994756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-23 20:56:11.994760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-23 20:56:11.994765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-23 20:56:11.994779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-23 20:56:11.994785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-23 20:56:11.994792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-23 20:56:11.994796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-23 20:56:11.994801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-23 20:56:11.994804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-23 20:56:11.994808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-23 20:56:11.994812 | orchestrator |
2026-02-23 20:56:11.994816 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-02-23 20:56:11.994820 | orchestrator | Monday 23 February 2026 20:54:03 +0000 (0:00:02.064) 0:02:16.645 *******
2026-02-23 20:56:11.994823 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:56:11.994827 | orchestrator |
2026-02-23 20:56:11.994841 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-02-23 20:56:11.994846 | orchestrator | Monday 23 February 2026 20:54:03 +0000 (0:00:00.122) 0:02:16.767 *******
2026-02-23 20:56:11.994849 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:56:11.994853 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:56:11.994857 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:56:11.994863 | orchestrator |
2026-02-23 20:56:11.994867 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-02-23 20:56:11.994871 | orchestrator | Monday 23 February 2026 20:54:03 +0000 (0:00:00.397) 0:02:17.164 *******
2026-02-23 20:56:11.994876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value':
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-23 20:56:11.994881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:56:11.994885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.994889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.994893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:56:11.994897 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:56:11.994912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-23 20:56:11.994922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:56:11.994926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.994930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.994934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:56:11.994939 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:56:11.994943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-23 20:56:11.994959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:56:11.994970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.994975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.994980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:56:11.994984 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:56:11.994988 | orchestrator | 2026-02-23 20:56:11.994993 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-23 20:56:11.994997 | orchestrator | Monday 23 February 2026 20:54:04 +0000 (0:00:00.599) 0:02:17.763 ******* 2026-02-23 20:56:11.995002 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 20:56:11.995006 | orchestrator | 2026-02-23 20:56:11.995010 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-23 20:56:11.995014 | orchestrator | Monday 23 February 2026 20:54:04 +0000 (0:00:00.489) 0:02:18.253 ******* 2026-02-23 20:56:11.995018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995115 | orchestrator | 2026-02-23 20:56:11.995119 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-23 20:56:11.995124 | orchestrator | Monday 23 February 2026 20:54:09 +0000 (0:00:04.733) 0:02:22.986 ******* 2026-02-23 20:56:11.995130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-23 20:56:11.995134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:56:11.995139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995143 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:56:11.995154 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:56:11.995162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-23 20:56:11.995166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:56:11.995173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:56:11.995186 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:56:11.995190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-23 20:56:11.995197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:56:11.995204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:56:11.995220 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:56:11.995224 | orchestrator | 2026-02-23 20:56:11.995228 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-23 20:56:11.995233 | orchestrator | Monday 23 February 2026 20:54:10 +0000 (0:00:00.618) 0:02:23.605 ******* 2026-02-23 20:56:11.995237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-23 20:56:11.995244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:56:11.995249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:56:11.995266 | orchestrator | skipping: [testbed-node-0] 2026-02-23 20:56:11.995269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-23 20:56:11.995273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:56:11.995279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-23 
20:56:11.995298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:56:11.995303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-23 20:56:11.995307 | orchestrator | skipping: [testbed-node-1] 2026-02-23 20:56:11.995311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-23 20:56:11.995321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-23 20:56:11.995325 | orchestrator | skipping: [testbed-node-2] 2026-02-23 20:56:11.995329 | orchestrator | 2026-02-23 20:56:11.995333 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-23 20:56:11.995337 | orchestrator | Monday 23 February 2026 20:54:11 +0000 (0:00:00.813) 0:02:24.419 ******* 2026-02-23 20:56:11.995343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
2026-02-23 20:56:11.995375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 
20:56:11.995392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995407 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995421 | orchestrator | 2026-02-23 20:56:11.995425 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-23 20:56:11.995431 | orchestrator | Monday 23 February 2026 
20:54:15 +0000 (0:00:04.230) 0:02:28.649 ******* 2026-02-23 20:56:11.995435 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-23 20:56:11.995439 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-23 20:56:11.995443 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-23 20:56:11.995447 | orchestrator | 2026-02-23 20:56:11.995451 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-23 20:56:11.995455 | orchestrator | Monday 23 February 2026 20:54:17 +0000 (0:00:01.961) 0:02:30.610 ******* 2026-02-23 20:56:11.995459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995500 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995529 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-23 20:56:11.995557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-23 20:56:11.995563 | orchestrator |
2026-02-23 20:56:11.995569 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-02-23 20:56:11.995576 | orchestrator | Monday 23 February 2026 20:54:33 +0000 (0:00:15.774) 0:02:46.385 *******
2026-02-23 20:56:11.995582 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.995588 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.995594 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.995612 | orchestrator |
2026-02-23 20:56:11.995618 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-02-23 20:56:11.995624 | orchestrator | Monday 23 February 2026 20:54:34 +0000 (0:00:01.476) 0:02:47.861 *******
2026-02-23 20:56:11.995650 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-23 20:56:11.995655 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-23 20:56:11.995661 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-23 20:56:11.995665 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-23 20:56:11.995669 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-23 20:56:11.995673 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-23 20:56:11.995681 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-23 20:56:11.995685 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-23 20:56:11.995689 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-23 20:56:11.995693 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-23 20:56:11.995697 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-23 20:56:11.995701 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-23 20:56:11.995704 | orchestrator |
2026-02-23 20:56:11.995708 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-02-23 20:56:11.995712 | orchestrator | Monday 23 February 2026 20:54:40 +0000 (0:00:05.685) 0:02:53.547 *******
2026-02-23 20:56:11.995718 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-23 20:56:11.995722 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-23 20:56:11.995726 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-23 20:56:11.995729 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-23 20:56:11.995733 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-23 20:56:11.995737 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-23 20:56:11.995743 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-23 20:56:11.995752 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-23 20:56:11.995758 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-23 20:56:11.995764 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-23 20:56:11.995770 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-23 20:56:11.995776 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-23 20:56:11.995782 | orchestrator |
2026-02-23 20:56:11.995788 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-02-23 20:56:11.995794 | orchestrator | Monday 23 February 2026 20:54:46 +0000 (0:00:05.774) 0:02:59.321 *******
2026-02-23 20:56:11.995800 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-23 20:56:11.995806 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-23 20:56:11.995813 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-23 20:56:11.995819 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-23 20:56:11.995825 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-23 20:56:11.995832 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-23 20:56:11.995836 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-23 20:56:11.995840 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-23 20:56:11.995843 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-23 20:56:11.995847 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-23 20:56:11.995851 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-23 20:56:11.995854 | orchestrator | changed: [testbed-node-2] =>
(item=server_ca.key.pem) 2026-02-23 20:56:11.995858 | orchestrator | 2026-02-23 20:56:11.995862 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-23 20:56:11.995865 | orchestrator | Monday 23 February 2026 20:54:50 +0000 (0:00:04.652) 0:03:03.974 ******* 2026-02-23 20:56:11.995870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-23 20:56:11.995911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-23 20:56:11.995931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-23 20:56:11.995975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-23 
20:56:11.995982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-23 20:56:11.995986 | orchestrator |
2026-02-23 20:56:11.995991 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-23 20:56:11.995995 | orchestrator | Monday 23 February 2026 20:54:53 +0000 (0:00:03.218) 0:03:07.192 *******
2026-02-23 20:56:11.995999 | orchestrator | skipping: [testbed-node-0]
2026-02-23 20:56:11.996003 | orchestrator | skipping: [testbed-node-1]
2026-02-23 20:56:11.996007 | orchestrator | skipping: [testbed-node-2]
2026-02-23 20:56:11.996010 | orchestrator |
2026-02-23 20:56:11.996014 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-02-23 20:56:11.996018 | orchestrator | Monday 23 February 2026 20:54:54 +0000 (0:00:00.308) 0:03:07.501 *******
2026-02-23 20:56:11.996022 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.996025 | orchestrator |
2026-02-23 20:56:11.996029 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-02-23 20:56:11.996034 | orchestrator | Monday 23 February 2026 20:54:55 +0000 (0:00:01.768) 0:03:09.269 *******
2026-02-23 20:56:11.996037 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.996041 | orchestrator |
2026-02-23 20:56:11.996047 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-02-23 20:56:11.996051 | orchestrator | Monday 23 February 2026 20:54:57 +0000 (0:00:02.148) 0:03:11.098 *******
2026-02-23 20:56:11.996055 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.996058 | orchestrator |
2026-02-23 20:56:11.996062 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-02-23 20:56:11.996066 | orchestrator | Monday 23 February 2026 20:54:59 +0000 (0:00:02.148) 0:03:13.249 *******
2026-02-23 20:56:11.996070 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.996074 | orchestrator |
2026-02-23 20:56:11.996078 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-02-23 20:56:11.996081 | orchestrator | Monday 23 February 2026 20:55:03 +0000 (0:00:03.023) 0:03:16.274 *******
2026-02-23 20:56:11.996085 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.996089 | orchestrator |
2026-02-23 20:56:11.996093 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-23 20:56:11.996097 | orchestrator | Monday 23 February 2026 20:55:24 +0000 (0:00:21.352) 0:03:37.627 *******
2026-02-23 20:56:11.996101 | orchestrator |
2026-02-23 20:56:11.996104 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-23 20:56:11.996108 | orchestrator | Monday 23 February 2026 20:55:24 +0000 (0:00:00.067) 0:03:37.694 *******
2026-02-23 20:56:11.996112 | orchestrator |
2026-02-23 20:56:11.996116 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-23 20:56:11.996122 | orchestrator | Monday 23 February 2026 20:55:24 +0000 (0:00:00.062) 0:03:37.757 *******
2026-02-23 20:56:11.996127 | orchestrator |
2026-02-23 20:56:11.996135 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-02-23 20:56:11.996143 | orchestrator | Monday 23 February 2026 20:55:24 +0000 (0:00:00.068) 0:03:37.825 *******
2026-02-23 20:56:11.996150 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.996155 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.996161 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.996167 | orchestrator |
2026-02-23 20:56:11.996173 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-02-23 20:56:11.996179 | orchestrator | Monday 23 February 2026 20:55:32 +0000 (0:00:08.364) 0:03:46.189 *******
2026-02-23 20:56:11.996184 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.996190 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.996196 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.996201 | orchestrator |
2026-02-23 20:56:11.996207 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-02-23 20:56:11.996213 | orchestrator | Monday 23 February 2026 20:55:43 +0000 (0:00:10.170) 0:03:56.360 *******
2026-02-23 20:56:11.996219 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.996225 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.996231 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.996237 | orchestrator |
2026-02-23 20:56:11.996243 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-02-23 20:56:11.996249 | orchestrator | Monday 23 February 2026 20:55:53 +0000 (0:00:10.417) 0:04:06.777 *******
2026-02-23 20:56:11.996255 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.996260 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.996266 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.996271 | orchestrator |
2026-02-23 20:56:11.996278 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-02-23 20:56:11.996284 | orchestrator | Monday 23 February 2026 20:56:03 +0000 (0:00:09.971) 0:04:16.749 *******
2026-02-23 20:56:11.996290 | orchestrator | changed: [testbed-node-0]
2026-02-23 20:56:11.996296 | orchestrator | changed: [testbed-node-1]
2026-02-23 20:56:11.996302 | orchestrator | changed: [testbed-node-2]
2026-02-23 20:56:11.996308 | orchestrator |
2026-02-23 20:56:11.996315 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:56:11.996322 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-23 20:56:11.996328 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-23 20:56:11.996335 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-23 20:56:11.996341 | orchestrator |
2026-02-23 20:56:11.996347 | orchestrator |
2026-02-23 20:56:11.996353 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:56:11.996359 | orchestrator | Monday 23 February 2026 20:56:09 +0000 (0:00:05.843) 0:04:22.593 *******
2026-02-23 20:56:11.996371 | orchestrator | ===============================================================================
2026-02-23 20:56:11.996378 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.35s
2026-02-23 20:56:11.996384 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.77s
2026-02-23 20:56:11.996390 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.66s
2026-02-23 20:56:11.996396 | orchestrator | octavia : Adding octavia related roles --------------------------------- 13.41s
2026-02-23 20:56:11.996403 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.42s
2026-02-23 20:56:11.996415 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.17s
2026-02-23 20:56:11.996420 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.97s
2026-02-23 20:56:11.996427 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.88s
2026-02-23 20:56:11.996433 | orchestrator | octavia : Restart octavia-api container --------------------------------- 8.36s
2026-02-23 20:56:11.996439 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.40s
2026-02-23 20:56:11.996448 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.75s
2026-02-23 20:56:11.996455 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.16s
2026-02-23 20:56:11.996461 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.03s
2026-02-23 20:56:11.996468 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.84s
2026-02-23 20:56:11.996475 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.78s
2026-02-23 20:56:11.996482 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.69s
2026-02-23 20:56:11.996488 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.22s
2026-02-23 20:56:11.996495 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 4.80s
2026-02-23 20:56:11.996501 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.73s
2026-02-23 20:56:11.996508 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 4.65s
2026-02-23 20:56:11.996515 | orchestrator | 2026-02-23 20:56:11 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-23 20:56:15.037766 | orchestrator | 2026-02-23 20:56:15 | INFO  | Wait 1 second(s) until refresh of
running tasks 2026-02-23 20:57:06.767118 |
orchestrator | 2026-02-23 20:57:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-23 20:57:09.810628 | orchestrator | 2026-02-23 20:57:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-23 20:57:12.852040 | orchestrator |
2026-02-23 20:57:13.196535 | orchestrator |
2026-02-23 20:57:13.202228 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Feb 23 20:57:13 UTC 2026
2026-02-23 20:57:13.202307 | orchestrator |
2026-02-23 20:57:13.516948 | orchestrator | ok: Runtime: 0:32:53.879802
2026-02-23 20:57:13.759861 |
2026-02-23 20:57:13.760001 | TASK [Bootstrap services]
2026-02-23 20:57:14.600201 | orchestrator |
2026-02-23 20:57:14.600339 | orchestrator | # BOOTSTRAP
2026-02-23 20:57:14.600350 | orchestrator |
2026-02-23 20:57:14.600356 | orchestrator | + set -e
2026-02-23 20:57:14.600361 | orchestrator | + echo
2026-02-23 20:57:14.600367 | orchestrator | + echo '# BOOTSTRAP'
2026-02-23 20:57:14.600375 | orchestrator | + echo
2026-02-23 20:57:14.600397 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-02-23 20:57:14.609289 | orchestrator | + set -e
2026-02-23 20:57:14.609628 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-02-23 20:57:19.136798 | orchestrator | 2026-02-23 20:57:19 | INFO  | It takes a moment until task ad79d3d9-f33c-467c-850b-efe4f810edbc (flavor-manager) has been started and output is visible here.
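The PLAY RECAP above packs per-host counters (`ok=57  changed=38  unreachable=0 ...`) into one line per host. A throwaway sketch, not part of OSISM or Zuul tooling, of how a CI wrapper might pull those counters apart so it can fail fast on `failed`/`unreachable`; it assumes the default Ansible recap line format:

```python
import re

# Matches a default-format Ansible recap line such as:
#   testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
RECAP_LINE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> tuple[str, dict]:
    """Return (hostname, counter dict) for one PLAY RECAP host line."""
    m = RECAP_LINE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m["counters"].split())
    }
    return m["host"], counters
```

A wrapper could then treat `counters["failed"] > 0 or counters["unreachable"] > 0` as a deployment failure even before the job's own exit code propagates.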
2026-02-23 20:57:25.891164 | orchestrator | 2026-02-23 20:57:22 | INFO  | Flavor SCS-1L-1 created
2026-02-23 20:57:25.891272 | orchestrator | 2026-02-23 20:57:22 | INFO  | Flavor SCS-1L-1-5 created
2026-02-23 20:57:25.891293 | orchestrator | 2026-02-23 20:57:22 | INFO  | Flavor SCS-1V-2 created
2026-02-23 20:57:25.891307 | orchestrator | 2026-02-23 20:57:22 | INFO  | Flavor SCS-1V-2-5 created
2026-02-23 20:57:25.891321 | orchestrator | 2026-02-23 20:57:22 | INFO  | Flavor SCS-1V-4 created
2026-02-23 20:57:25.891329 | orchestrator | 2026-02-23 20:57:22 | INFO  | Flavor SCS-1V-4-10 created
2026-02-23 20:57:25.891337 | orchestrator | 2026-02-23 20:57:22 | INFO  | Flavor SCS-1V-8 created
2026-02-23 20:57:25.891345 | orchestrator | 2026-02-23 20:57:23 | INFO  | Flavor SCS-1V-8-20 created
2026-02-23 20:57:25.891362 | orchestrator | 2026-02-23 20:57:23 | INFO  | Flavor SCS-2V-4 created
2026-02-23 20:57:25.891370 | orchestrator | 2026-02-23 20:57:23 | INFO  | Flavor SCS-2V-4-10 created
2026-02-23 20:57:25.891377 | orchestrator | 2026-02-23 20:57:23 | INFO  | Flavor SCS-2V-8 created
2026-02-23 20:57:25.891384 | orchestrator | 2026-02-23 20:57:23 | INFO  | Flavor SCS-2V-8-20 created
2026-02-23 20:57:25.891392 | orchestrator | 2026-02-23 20:57:23 | INFO  | Flavor SCS-2V-16 created
2026-02-23 20:57:25.891399 | orchestrator | 2026-02-23 20:57:23 | INFO  | Flavor SCS-2V-16-50 created
2026-02-23 20:57:25.891406 | orchestrator | 2026-02-23 20:57:23 | INFO  | Flavor SCS-4V-8 created
2026-02-23 20:57:25.891413 | orchestrator | 2026-02-23 20:57:24 | INFO  | Flavor SCS-4V-8-20 created
2026-02-23 20:57:25.891421 | orchestrator | 2026-02-23 20:57:24 | INFO  | Flavor SCS-4V-16 created
2026-02-23 20:57:25.891428 | orchestrator | 2026-02-23 20:57:24 | INFO  | Flavor SCS-4V-16-50 created
2026-02-23 20:57:25.891438 | orchestrator | 2026-02-23 20:57:24 | INFO  | Flavor SCS-4V-32 created
2026-02-23 20:57:25.891451 | orchestrator | 2026-02-23 20:57:24 | INFO  | Flavor SCS-4V-32-100 created
2026-02-23 20:57:25.891547 | orchestrator | 2026-02-23 20:57:24 | INFO  | Flavor SCS-8V-16 created
2026-02-23 20:57:25.891561 | orchestrator | 2026-02-23 20:57:24 | INFO  | Flavor SCS-8V-16-50 created
2026-02-23 20:57:25.891571 | orchestrator | 2026-02-23 20:57:25 | INFO  | Flavor SCS-8V-32 created
2026-02-23 20:57:25.891579 | orchestrator | 2026-02-23 20:57:25 | INFO  | Flavor SCS-8V-32-100 created
2026-02-23 20:57:25.891586 | orchestrator | 2026-02-23 20:57:25 | INFO  | Flavor SCS-16V-32 created
2026-02-23 20:57:25.891594 | orchestrator | 2026-02-23 20:57:25 | INFO  | Flavor SCS-16V-32-100 created
2026-02-23 20:57:25.891601 | orchestrator | 2026-02-23 20:57:25 | INFO  | Flavor SCS-2V-4-20s created
2026-02-23 20:57:25.891609 | orchestrator | 2026-02-23 20:57:25 | INFO  | Flavor SCS-4V-8-50s created
2026-02-23 20:57:25.891616 | orchestrator | 2026-02-23 20:57:25 | INFO  | Flavor SCS-8V-32-100s created
2026-02-23 20:57:28.212077 | orchestrator | 2026-02-23 20:57:28 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-02-23 20:57:38.285247 | orchestrator | 2026-02-23 20:57:38 | INFO  | Prepare task for execution of bootstrap-basic.
2026-02-23 20:57:38.355908 | orchestrator | 2026-02-23 20:57:38 | INFO  | Task 12d2cd0f-b60d-490b-9296-50f073513864 (bootstrap-basic) was prepared for execution.
2026-02-23 20:57:38.355955 | orchestrator | 2026-02-23 20:57:38 | INFO  | It takes a moment until task 12d2cd0f-b60d-490b-9296-50f073513864 (bootstrap-basic) has been started and output is visible here.
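The flavor names logged above follow the SCS flavor naming scheme. Assuming the usual reading of `SCS-<vCPUs><class>-<RAM GiB>[-<root disk GB>[s]]` (so `SCS-4V-16-50` advertises 4 vCPUs, 16 GiB RAM and a 50 GB root disk, with a trailing `s` marking local SSD), a hypothetical helper, not part of flavor-manager, to recover those numbers from a name:

```python
import re

# Assumed SCS naming pattern (verify against the SCS flavor naming spec):
# SCS-<vCPUs><cpu class letter>-<RAM in GiB>[-<root disk in GB>[s = local SSD]]
SCS_NAME = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[A-Z])"
    r"-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name into the resources it advertises."""
    m = SCS_NAME.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m["cpus"]),
        "cpu_class": m["cpu_class"],      # e.g. V (vCPU), L as seen in SCS-1L-1
        "ram_gib": int(m["ram"]),
        "disk_gb": int(m["disk"]) if m["disk"] else None,  # None = diskless flavor
        "local_ssd": m["ssd"] is not None,
    }
```

With that reading, the `flavor-manager` list above spans diskless flavors (`SCS-8V-32`), disk flavors (`SCS-8V-32-100`) and local-SSD variants (`SCS-8V-32-100s`).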
2026-02-23 20:58:24.026230 | orchestrator |
2026-02-23 20:58:24.026301 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-02-23 20:58:24.026310 | orchestrator |
2026-02-23 20:58:24.026317 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-23 20:58:24.026324 | orchestrator | Monday 23 February 2026 20:57:42 +0000 (0:00:00.069) 0:00:00.069 *******
2026-02-23 20:58:24.026330 | orchestrator | ok: [localhost]
2026-02-23 20:58:24.026337 | orchestrator |
2026-02-23 20:58:24.026442 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-02-23 20:58:24.026452 | orchestrator | Monday 23 February 2026 20:57:44 +0000 (0:00:01.922) 0:00:01.992 *******
2026-02-23 20:58:24.026459 | orchestrator | ok: [localhost]
2026-02-23 20:58:24.026465 | orchestrator |
2026-02-23 20:58:24.026471 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-02-23 20:58:24.026477 | orchestrator | Monday 23 February 2026 20:57:53 +0000 (0:00:08.423) 0:00:10.415 *******
2026-02-23 20:58:24.026484 | orchestrator | changed: [localhost]
2026-02-23 20:58:24.026490 | orchestrator |
2026-02-23 20:58:24.026496 | orchestrator | TASK [Create public network] ***************************************************
2026-02-23 20:58:24.026502 | orchestrator | Monday 23 February 2026 20:58:01 +0000 (0:00:08.199) 0:00:18.615 *******
2026-02-23 20:58:24.026508 | orchestrator | changed: [localhost]
2026-02-23 20:58:24.026514 | orchestrator |
2026-02-23 20:58:24.026521 | orchestrator | TASK [Set public network to default] *******************************************
2026-02-23 20:58:24.026531 | orchestrator | Monday 23 February 2026 20:58:06 +0000 (0:00:04.703) 0:00:23.319 *******
2026-02-23 20:58:24.026537 | orchestrator | changed: [localhost]
2026-02-23 20:58:24.026543 | orchestrator |
2026-02-23 20:58:24.026549 | orchestrator | TASK [Create public subnet] ****************************************************
2026-02-23 20:58:24.026555 | orchestrator | Monday 23 February 2026 20:58:12 +0000 (0:00:06.333) 0:00:29.653 *******
2026-02-23 20:58:24.026561 | orchestrator | changed: [localhost]
2026-02-23 20:58:24.026568 | orchestrator |
2026-02-23 20:58:24.026575 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-02-23 20:58:24.026581 | orchestrator | Monday 23 February 2026 20:58:16 +0000 (0:00:04.070) 0:00:33.723 *******
2026-02-23 20:58:24.026587 | orchestrator | changed: [localhost]
2026-02-23 20:58:24.026593 | orchestrator |
2026-02-23 20:58:24.026606 | orchestrator | TASK [Create manager role] *****************************************************
2026-02-23 20:58:24.026612 | orchestrator | Monday 23 February 2026 20:58:20 +0000 (0:00:03.845) 0:00:37.569 *******
2026-02-23 20:58:24.026618 | orchestrator | ok: [localhost]
2026-02-23 20:58:24.026624 | orchestrator |
2026-02-23 20:58:24.026630 | orchestrator | PLAY RECAP *********************************************************************
2026-02-23 20:58:24.026636 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-23 20:58:24.026642 | orchestrator |
2026-02-23 20:58:24.026649 | orchestrator |
2026-02-23 20:58:24.026655 | orchestrator | TASKS RECAP ********************************************************************
2026-02-23 20:58:24.026662 | orchestrator | Monday 23 February 2026 20:58:23 +0000 (0:00:03.453) 0:00:41.022 *******
2026-02-23 20:58:24.026668 | orchestrator | ===============================================================================
2026-02-23 20:58:24.026675 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.42s
2026-02-23 20:58:24.026681 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.20s
2026-02-23 20:58:24.026687 | orchestrator | Set public network to default ------------------------------------------- 6.33s
2026-02-23 20:58:24.026704 | orchestrator | Create public network --------------------------------------------------- 4.70s
2026-02-23 20:58:24.026710 | orchestrator | Create public subnet ---------------------------------------------------- 4.07s
2026-02-23 20:58:24.026716 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.85s
2026-02-23 20:58:24.026723 | orchestrator | Create manager role ----------------------------------------------------- 3.45s
2026-02-23 20:58:24.026729 | orchestrator | Gathering Facts --------------------------------------------------------- 1.92s
2026-02-23 20:58:26.616648 | orchestrator | 2026-02-23 20:58:26 | INFO  | It takes a moment until task c3eb4f5a-d34d-4039-868c-07ee4cbfc767 (image-manager) has been started and output is visible here.
2026-02-23 20:59:07.901908 | orchestrator | 2026-02-23 20:58:29 | INFO  | Processing image 'Cirros 0.6.2'
2026-02-23 20:59:07.902068 | orchestrator | 2026-02-23 20:58:29 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-02-23 20:59:07.902083 | orchestrator | 2026-02-23 20:58:29 | INFO  | Importing image Cirros 0.6.2
2026-02-23 20:59:07.902091 | orchestrator | 2026-02-23 20:58:29 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-23 20:59:07.902099 | orchestrator | 2026-02-23 20:58:31 | INFO  | Waiting for image to leave queued state...
2026-02-23 20:59:07.902107 | orchestrator | 2026-02-23 20:58:33 | INFO  | Waiting for import to complete...
2026-02-23 20:59:07.902116 | orchestrator | 2026-02-23 20:58:44 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-02-23 20:59:07.902124 | orchestrator | 2026-02-23 20:58:44 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-02-23 20:59:07.902153 | orchestrator | 2026-02-23 20:58:44 | INFO  | Setting internal_version = 0.6.2
2026-02-23 20:59:07.902161 | orchestrator | 2026-02-23 20:58:44 | INFO  | Setting image_original_user = cirros
2026-02-23 20:59:07.902168 | orchestrator | 2026-02-23 20:58:44 | INFO  | Adding tag os:cirros
2026-02-23 20:59:07.902175 | orchestrator | 2026-02-23 20:58:44 | INFO  | Setting property architecture: x86_64
2026-02-23 20:59:07.902181 | orchestrator | 2026-02-23 20:58:45 | INFO  | Setting property hw_disk_bus: scsi
2026-02-23 20:59:07.902188 | orchestrator | 2026-02-23 20:58:45 | INFO  | Setting property hw_rng_model: virtio
2026-02-23 20:59:07.902195 | orchestrator | 2026-02-23 20:58:45 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-23 20:59:07.902202 | orchestrator | 2026-02-23 20:58:45 | INFO  | Setting property hw_watchdog_action: reset
2026-02-23 20:59:07.902208 | orchestrator | 2026-02-23 20:58:46 | INFO  | Setting property hypervisor_type: qemu
2026-02-23 20:59:07.902214 | orchestrator | 2026-02-23 20:58:46 | INFO  | Setting property os_distro: cirros
2026-02-23 20:59:07.902221 | orchestrator | 2026-02-23 20:58:46 | INFO  | Setting property os_purpose: minimal
2026-02-23 20:59:07.902227 | orchestrator | 2026-02-23 20:58:46 | INFO  | Setting property replace_frequency: never
2026-02-23 20:59:07.902234 | orchestrator | 2026-02-23 20:58:46 | INFO  | Setting property uuid_validity: none
2026-02-23 20:59:07.902240 | orchestrator | 2026-02-23 20:58:46 | INFO  | Setting property provided_until: none
2026-02-23 20:59:07.902247 | orchestrator | 2026-02-23 20:58:47 | INFO  | Setting property image_description: Cirros
2026-02-23 20:59:07.902253 | orchestrator | 2026-02-23 20:58:47 | INFO  | Setting property image_name: Cirros
2026-02-23 20:59:07.902338 | orchestrator | 2026-02-23 20:58:47 | INFO  | Setting property internal_version: 0.6.2
2026-02-23 20:59:07.902370 | orchestrator | 2026-02-23 20:58:47 | INFO  | Setting property image_original_user: cirros
2026-02-23 20:59:07.902375 | orchestrator | 2026-02-23 20:58:47 | INFO  | Setting property os_version: 0.6.2
2026-02-23 20:59:07.902387 | orchestrator | 2026-02-23 20:58:48 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-23 20:59:07.902392 | orchestrator | 2026-02-23 20:58:48 | INFO  | Setting property image_build_date: 2023-05-30
2026-02-23 20:59:07.902396 | orchestrator | 2026-02-23 20:58:48 | INFO  | Checking status of 'Cirros 0.6.2'
2026-02-23 20:59:07.902400 | orchestrator | 2026-02-23 20:58:48 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-02-23 20:59:07.902416 | orchestrator | 2026-02-23 20:58:48 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-02-23 20:59:07.902430 | orchestrator | 2026-02-23 20:58:48 | INFO  | Processing image 'Cirros 0.6.3'
2026-02-23 20:59:07.902435 | orchestrator | 2026-02-23 20:58:48 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-02-23 20:59:07.902440 | orchestrator | 2026-02-23 20:58:48 | INFO  | Importing image Cirros 0.6.3
2026-02-23 20:59:07.902445 | orchestrator | 2026-02-23 20:58:48 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-23 20:59:07.902449 | orchestrator | 2026-02-23 20:58:50 | INFO  | Waiting for image to leave queued state...
2026-02-23 20:59:07.902454 | orchestrator | 2026-02-23 20:58:52 | INFO  | Waiting for import to complete...
2026-02-23 20:59:07.902473 | orchestrator | 2026-02-23 20:59:02 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-23 20:59:07.902478 | orchestrator | 2026-02-23 20:59:03 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-23 20:59:07.902484 | orchestrator | 2026-02-23 20:59:03 | INFO  | Setting internal_version = 0.6.3
2026-02-23 20:59:07.902490 | orchestrator | 2026-02-23 20:59:03 | INFO  | Setting image_original_user = cirros
2026-02-23 20:59:07.902496 | orchestrator | 2026-02-23 20:59:03 | INFO  | Adding tag os:cirros
2026-02-23 20:59:07.902502 | orchestrator | 2026-02-23 20:59:03 | INFO  | Setting property architecture: x86_64
2026-02-23 20:59:07.902511 | orchestrator | 2026-02-23 20:59:04 | INFO  | Setting property hw_disk_bus: scsi
2026-02-23 20:59:07.902516 | orchestrator | 2026-02-23 20:59:04 | INFO  | Setting property hw_rng_model: virtio
2026-02-23 20:59:07.902523 | orchestrator | 2026-02-23 20:59:04 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-23 20:59:07.902529 | orchestrator | 2026-02-23 20:59:04 | INFO  | Setting property hw_watchdog_action: reset
2026-02-23 20:59:07.902535 | orchestrator | 2026-02-23 20:59:04 | INFO  | Setting property hypervisor_type: qemu
2026-02-23 20:59:07.902541 | orchestrator | 2026-02-23 20:59:05 | INFO  | Setting property os_distro: cirros
2026-02-23 20:59:07.902548 | orchestrator | 2026-02-23 20:59:05 | INFO  | Setting property os_purpose: minimal
2026-02-23 20:59:07.902555 | orchestrator | 2026-02-23 20:59:05 | INFO  | Setting property replace_frequency: never
2026-02-23 20:59:07.902562 | orchestrator | 2026-02-23 20:59:05 | INFO  | Setting property uuid_validity: none
2026-02-23 20:59:07.902569 | orchestrator | 2026-02-23 20:59:05 | INFO  | Setting property provided_until: none
2026-02-23 20:59:07.902575 | orchestrator | 2026-02-23 20:59:05 | INFO  | Setting property image_description: Cirros
2026-02-23 20:59:07.902582 | orchestrator | 2026-02-23 20:59:06 | INFO  | Setting property image_name: Cirros
2026-02-23 20:59:07.902595 | orchestrator | 2026-02-23 20:59:06 | INFO  | Setting property internal_version: 0.6.3
2026-02-23 20:59:07.902600 | orchestrator | 2026-02-23 20:59:06 | INFO  | Setting property image_original_user: cirros
2026-02-23 20:59:07.902604 | orchestrator | 2026-02-23 20:59:06 | INFO  | Setting property os_version: 0.6.3
2026-02-23 20:59:07.902609 | orchestrator | 2026-02-23 20:59:06 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-23 20:59:07.902613 | orchestrator | 2026-02-23 20:59:06 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-23 20:59:07.902618 | orchestrator | 2026-02-23 20:59:07 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-23 20:59:07.902622 | orchestrator | 2026-02-23 20:59:07 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-23 20:59:07.902627 | orchestrator | 2026-02-23 20:59:07 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-23 20:59:08.226226 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh
2026-02-23 20:59:10.469363 | orchestrator | 2026-02-23 20:59:10 | INFO  | date: 2026-02-23
2026-02-23 20:59:10.469454 | orchestrator | 2026-02-23 20:59:10 | INFO  | image: octavia-amphora-haproxy-2024.2.20260223.qcow2
2026-02-23 20:59:10.469484 | orchestrator | 2026-02-23 20:59:10 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260223.qcow2
2026-02-23 20:59:10.469495 | orchestrator | 2026-02-23 20:59:10 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260223.qcow2.CHECKSUM
2026-02-23 20:59:10.608810 | orchestrator | 2026-02-23 20:59:10 | INFO  | checksum: 8fa1c871cfb41c51427d243d56beff3796ad151e7381526e195fa2edff653330
2026-02-23 20:59:10.682091 | orchestrator | 2026-02-23 20:59:10 | INFO  | It takes a moment until task 5f5c5706-9650-413c-a712-2e5567addd36 (image-manager) has been started and output is visible here.
2026-02-23 21:00:11.093934 | orchestrator | 2026-02-23 20:59:12 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-23'
2026-02-23 21:00:11.093989 | orchestrator | 2026-02-23 20:59:12 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260223.qcow2: 200
2026-02-23 21:00:11.093997 | orchestrator | 2026-02-23 20:59:12 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-23
2026-02-23 21:00:11.094002 | orchestrator | 2026-02-23 20:59:12 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260223.qcow2
2026-02-23 21:00:11.094006 | orchestrator | 2026-02-23 20:59:14 | INFO  | Waiting for image to leave queued state...
2026-02-23 21:00:11.094010 | orchestrator | 2026-02-23 20:59:16 | INFO  | Waiting for import to complete...
2026-02-23 21:00:11.094059 | orchestrator | 2026-02-23 20:59:26 | INFO  | Waiting for import to complete...
2026-02-23 21:00:11.094063 | orchestrator | 2026-02-23 20:59:36 | INFO  | Waiting for import to complete...
2026-02-23 21:00:11.094067 | orchestrator | 2026-02-23 20:59:46 | INFO  | Waiting for import to complete...
2026-02-23 21:00:11.094073 | orchestrator | 2026-02-23 20:59:56 | INFO  | Waiting for import to complete...
2026-02-23 21:00:11.094077 | orchestrator | 2026-02-23 21:00:06 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-23' successfully completed, reloading images
2026-02-23 21:00:11.094082 | orchestrator | 2026-02-23 21:00:07 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-23'
2026-02-23 21:00:11.094097 | orchestrator | 2026-02-23 21:00:07 | INFO  | Setting internal_version = 2026-02-23
2026-02-23 21:00:11.094101 | orchestrator | 2026-02-23 21:00:07 | INFO  | Setting image_original_user = ubuntu
2026-02-23 21:00:11.094106 | orchestrator | 2026-02-23 21:00:07 | INFO  | Adding tag amphora
2026-02-23 21:00:11.094110 | orchestrator | 2026-02-23 21:00:07 | INFO  | Adding tag os:ubuntu
2026-02-23 21:00:11.094114 | orchestrator | 2026-02-23 21:00:07 | INFO  | Setting property architecture: x86_64
2026-02-23 21:00:11.094118 | orchestrator | 2026-02-23 21:00:07 | INFO  | Setting property hw_disk_bus: scsi
2026-02-23 21:00:11.094122 | orchestrator | 2026-02-23 21:00:07 | INFO  | Setting property hw_rng_model: virtio
2026-02-23 21:00:11.094126 | orchestrator | 2026-02-23 21:00:07 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-23 21:00:11.094130 | orchestrator | 2026-02-23 21:00:08 | INFO  | Setting property hw_watchdog_action: reset
2026-02-23 21:00:11.094134 | orchestrator | 2026-02-23 21:00:08 | INFO  | Setting property hypervisor_type: qemu
2026-02-23 21:00:11.094167 | orchestrator | 2026-02-23 21:00:08 | INFO  | Setting property os_distro: ubuntu
2026-02-23 21:00:11.094171 | orchestrator | 2026-02-23 21:00:08 | INFO  | Setting property replace_frequency: quarterly
2026-02-23 21:00:11.094175 | orchestrator | 2026-02-23 21:00:08 | INFO  | Setting property uuid_validity: last-1
2026-02-23 21:00:11.094179 | orchestrator | 2026-02-23 21:00:09 | INFO  | Setting property provided_until: none
2026-02-23 21:00:11.094183 | orchestrator | 2026-02-23 21:00:09 | INFO  | Setting property os_purpose: network
2026-02-23 21:00:11.094186 | orchestrator | 2026-02-23 21:00:09 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-02-23 21:00:11.094197 | orchestrator | 2026-02-23 21:00:09 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-02-23 21:00:11.094201 | orchestrator | 2026-02-23 21:00:09 | INFO  | Setting property internal_version: 2026-02-23
2026-02-23 21:00:11.094205 | orchestrator | 2026-02-23 21:00:09 | INFO  | Setting property image_original_user: ubuntu
2026-02-23 21:00:11.094209 | orchestrator | 2026-02-23 21:00:10 | INFO  | Setting property os_version: 2026-02-23
2026-02-23 21:00:11.094213 | orchestrator | 2026-02-23 21:00:10 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260223.qcow2
2026-02-23 21:00:11.094217 | orchestrator | 2026-02-23 21:00:10 | INFO  | Setting property image_build_date: 2026-02-23
2026-02-23 21:00:11.094221 | orchestrator | 2026-02-23 21:00:10 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-23'
2026-02-23 21:00:11.094225 | orchestrator | 2026-02-23 21:00:10 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-23'
2026-02-23 21:00:11.094229 | orchestrator | 2026-02-23 21:00:10 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-02-23 21:00:11.094241 | orchestrator | 2026-02-23 21:00:10 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-02-23 21:00:11.094246 | orchestrator | 2026-02-23 21:00:10 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-02-23 21:00:11.094250 | orchestrator | 2026-02-23 21:00:10 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-02-23 21:00:11.467013 | orchestrator | ok: Runtime: 0:02:57.242066
2026-02-23 21:00:11.482436 |
2026-02-23 21:00:11.482552 | TASK [Run checks]
2026-02-23 21:00:12.154929 | orchestrator | + set -e
2026-02-23 21:00:12.155027 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-23 21:00:12.155039 | orchestrator | ++ export INTERACTIVE=false
2026-02-23 21:00:12.155050 | orchestrator | ++ INTERACTIVE=false
2026-02-23 21:00:12.155058 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-23 21:00:12.155064 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-23 21:00:12.155070 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-23 21:00:12.156146 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-23 21:00:12.162067 | orchestrator |
2026-02-23 21:00:12.162195 | orchestrator | # CHECK
2026-02-23 21:00:12.162208 | orchestrator |
2026-02-23 21:00:12.162216 | orchestrator | ++ export MANAGER_VERSION=latest
2026-02-23 21:00:12.162225 | orchestrator | ++ MANAGER_VERSION=latest
2026-02-23 21:00:12.162232 | orchestrator | + echo
2026-02-23 21:00:12.162239 | orchestrator | + echo '# CHECK'
2026-02-23 21:00:12.162245 | orchestrator | + echo
2026-02-23 21:00:12.162258 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-23 21:00:12.162628 | orchestrator | ++ semver latest 5.0.0
2026-02-23 21:00:12.223993 | orchestrator |
2026-02-23 21:00:12.224059 | orchestrator | ## Containers @ testbed-manager
2026-02-23 21:00:12.224069 | orchestrator |
2026-02-23 21:00:12.224084 | orchestrator | + [[ -1 -eq -1 ]]
2026-02-23 21:00:12.224091 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-02-23 21:00:12.224097 | orchestrator | + echo
2026-02-23 21:00:12.224104 | orchestrator | + echo '## Containers @ testbed-manager'
2026-02-23 21:00:12.224112 | orchestrator | + echo
2026-02-23 21:00:12.224119 | orchestrator | + osism container testbed-manager ps
2026-02-23 21:00:14.270442 | orchestrator | 2026-02-23 21:00:14 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-02-23 21:00:14.658586 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-23 21:00:14.658736 | orchestrator | be141e6c40b3 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_blackbox_exporter
2026-02-23 21:00:14.658763 | orchestrator | 35c94d9ee64f registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_alertmanager
2026-02-23 21:00:14.658776 | orchestrator | ef4f4743faf9 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor
2026-02-23 21:00:14.658783 | orchestrator | fb590844f245 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter
2026-02-23 21:00:14.658794 | orchestrator | c27798049a88 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_server
2026-02-23 21:00:14.658801 | orchestrator | f1d95182963e registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient
2026-02-23 21:00:14.658809 | orchestrator | 7eaba97689db registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-02-23 21:00:14.658817 | orchestrator | 48ba2841f92b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-02-23 21:00:14.658846 | orchestrator | aa049fbd8095 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-02-23 21:00:14.658854 | orchestrator | 40fd4c8cf041 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin
2026-02-23 21:00:14.658862 | orchestrator | 4303df716a9d registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 29 minutes openstackclient
2026-02-23 21:00:14.658870 | orchestrator | f52d08498b2b registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 30 minutes ago Up 30 minutes (healthy) 8080/tcp homer
2026-02-23 21:00:14.658878 | orchestrator | 9327880956bc registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-02-23 21:00:14.658885 | orchestrator | f83815adf4dc registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 57 minutes ago Up 36 minutes (healthy) manager-inventory_reconciler-1
2026-02-23 21:00:14.658893 | orchestrator | ce6dca52fb5d registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) kolla-ansible
2026-02-23 21:00:14.658918 | orchestrator | 73a6e6305d28 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) osism-ansible
2026-02-23 21:00:14.658932 | orchestrator | c3afd9a5e100 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) osism-kubernetes
2026-02-23 21:00:14.658939 | orchestrator | 20173a16122b registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) ceph-ansible
2026-02-23 21:00:14.658946 | orchestrator | f4b316dc3437 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 57 minutes ago Up 37 minutes (healthy) 8000/tcp manager-ara-server-1
2026-02-23 21:00:14.658954 | orchestrator | d80642bad605 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-listener-1
2026-02-23 21:00:14.658960 | orchestrator | 5b436d97818f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-02-23 21:00:14.658967 | orchestrator | 375255d615ab registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 57 minutes ago Up 37 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-02-23 21:00:14.658974 | orchestrator | 53f9cf4fcb72 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-flower-1
2026-02-23 21:00:14.658987 | orchestrator | 4de29a77471d registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 3306/tcp manager-mariadb-1
2026-02-23 21:00:14.658994 | orchestrator | 5fa139f22426 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-beat-1
2026-02-23 21:00:14.659001 | orchestrator | 0c4f8fb45c70 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-openstack-1
2026-02-23 21:00:14.659007 | orchestrator | e7d997b33ad7 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 57 minutes ago Up 37 minutes (healthy) osismclient
2026-02-23 21:00:14.659013 | orchestrator | ed2b893d04f4 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 6379/tcp manager-redis-1
2026-02-23 21:00:14.659019 | orchestrator | c9c2a19ab861 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-02-23 21:00:14.963077 | orchestrator |
2026-02-23 21:00:14.963185 | orchestrator | ## Images @ testbed-manager
2026-02-23 21:00:14.963196 | orchestrator |
2026-02-23 21:00:14.963204 | orchestrator | + echo
2026-02-23 21:00:14.963211 | orchestrator | + echo '## Images @ testbed-manager'
2026-02-23 21:00:14.963218 | orchestrator | + echo
2026-02-23 21:00:14.963229 | orchestrator | + osism container testbed-manager images
2026-02-23 21:00:17.401862 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-23 21:00:17.401915 | orchestrator | registry.osism.tech/osism/osism-ansible latest c6f06501950d About an hour ago 613MB
2026-02-23 21:00:17.401920 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest bc7ae44c2533 4 hours ago 335MB
2026-02-23 21:00:17.401931 | orchestrator | registry.osism.tech/osism/osism-frontend latest f8eaa12877cb 4 hours ago 232MB
2026-02-23 21:00:17.401938 | orchestrator | registry.osism.tech/osism/osism latest e54764b5892b 4 hours ago 409MB
2026-02-23 21:00:17.401950 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 3e84edb92a41 17 hours ago 239MB
2026-02-23 21:00:17.401954 | orchestrator | registry.osism.tech/osism/cephclient reef ebb02e0df028 17 hours ago 453MB
2026-02-23 21:00:17.401958 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f894a5bc6d0b 19 hours ago 673MB
2026-02-23 21:00:17.401962 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5b921837d137 19 hours ago 271MB
2026-02-23 21:00:17.401965 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 743bb34b6cee 19 hours ago 584MB
2026-02-23 21:00:17.401969 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 278eaa73d02c 19 hours ago 311MB
2026-02-23 21:00:17.401975 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 23416dcdd72e 19 hours ago 409MB
2026-02-23 21:00:17.401980 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 ffe2e0b2e550 19 hours ago 313MB
2026-02-23 21:00:17.401988 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 290ded05e43a 19 hours ago 844MB
2026-02-23 21:00:17.401995 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 4e4cb8e72dfc 19 hours ago 363MB
2026-02-23 21:00:17.402030 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 5b1c323f0a81 21 hours ago 610MB
2026-02-23 21:00:17.402036 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 32525e23992a 21 hours ago 559MB
2026-02-23 21:00:17.402039 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 78262c5d1463 21 hours ago 1.22GB
2026-02-23 21:00:17.402043 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 3 weeks ago 41.4MB
2026-02-23 21:00:17.402046 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB
2026-02-23 21:00:17.402050 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 3 months ago 334MB
2026-02-23 21:00:17.402053 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB
2026-02-23 21:00:17.402057 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 6 months ago 275MB
2026-02-23 21:00:17.402060 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 7 months ago 226MB
2026-02-23 21:00:17.402064 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB
2026-02-23 21:00:17.726407 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-23 21:00:17.726829 | orchestrator | ++ semver latest 5.0.0
2026-02-23 21:00:17.773736 | orchestrator |
2026-02-23 21:00:17.773795 | orchestrator | ## Containers @ testbed-node-0
2026-02-23 21:00:17.773802 | orchestrator |
2026-02-23 21:00:17.773807 | orchestrator | + [[ -1 -eq -1 ]]
2026-02-23 21:00:17.773811 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-02-23 21:00:17.773816 | orchestrator | + echo
2026-02-23 21:00:17.773821 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-02-23 21:00:17.773826 | orchestrator | + echo
2026-02-23 21:00:17.773832 | orchestrator | + osism container testbed-node-0 ps
2026-02-23 21:00:20.235855 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-23 21:00:20.235924 | orchestrator | e3f353b54d43 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-02-23 21:00:20.235934 | orchestrator | 715064c23751 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-02-23 21:00:20.235942 | orchestrator | e11dd42b12bf registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-02-23 21:00:20.235949 | orchestrator | fab04d041f42 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-02-23 21:00:20.235955 | orchestrator | 57edc351255d registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-02-23 21:00:20.235962 | orchestrator | e98287d5c7c0 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2026-02-23 21:00:20.235969 | orchestrator | 95f836521c93 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2026-02-23 21:00:20.235985 | orchestrator | 7004219311bc registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2026-02-23 21:00:20.235990 | orchestrator | 0cd968d8be6c registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-02-23 21:00:20.236005 | orchestrator | ed2f7b2480eb registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2026-02-23 21:00:20.236009 | orchestrator | 82da6e3a8528 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup
2026-02-23 21:00:20.236013 | orchestrator | fe0d9491218e registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume
2026-02-23 21:00:20.236018 | orchestrator | d859cd3c7b94 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler
2026-02-23 21:00:20.236022 | orchestrator | 8547f89def6e registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api
2026-02-23 21:00:20.236026 | orchestrator | 1af46899364c registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2026-02-23 21:00:20.236031 | orchestrator | 7f673431bb0b registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter
2026-02-23 21:00:20.236035 | orchestrator | 653a60ebcc9a registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor
2026-02-23 21:00:20.236040 | orchestrator | 70215d177da0 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter
2026-02-23 21:00:20.236045 | orchestrator | c78a2682c720 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes prometheus_mysqld_exporter
2026-02-23 21:00:20.236049 | orchestrator | 34e97b1fede0 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter
2026-02-23 21:00:20.236054 | orchestrator | 3089cce330e7 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor
2026-02-23 21:00:20.236068 | orchestrator | 4bfc484cdf98 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2026-02-23 21:00:20.236073 | orchestrator | 5a02c1b79bac registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server
2026-02-23 21:00:20.236077 | orchestrator | fe9dfaefa8db registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2026-02-23 21:00:20.236081 | orchestrator | 7a69d9255e17 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2026-02-23 21:00:20.236088 | orchestrator | 9af22074fa4a registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns
2026-02-23 21:00:20.236092 | orchestrator | 8a3f88ac864c registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer
2026-02-23 21:00:20.236097 | orchestrator | 5cb31841e641 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central
2026-02-23 21:00:20.236104 | orchestrator | 8ea2f75860f7 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_api
2026-02-23 21:00:20.236110 | orchestrator | 7a2264909e43 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_worker
2026-02-23 21:00:20.236115 | orchestrator | b454e2f487ad registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) designate_backend_bind9
2026-02-23 21:00:20.236131 | orchestrator | 348776df05a6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2026-02-23 21:00:20.236135 | orchestrator | de8fa35de0a6 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener
2026-02-23 21:00:20.236140 | orchestrator | 80d58816a0c5 registry.osism.tech/kolla/barbican-api:2024.2
"dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2026-02-23 21:00:20.236147 | orchestrator | 6a23df959ec1 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-02-23 21:00:20.236154 | orchestrator | 8fe6820c4400 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-02-23 21:00:20.236161 | orchestrator | d1edef09acf5 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-02-23 21:00:20.236167 | orchestrator | a96bb34cab96 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-02-23 21:00:20.236173 | orchestrator | dd05b53361dc registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-02-23 21:00:20.236179 | orchestrator | 85d0485d813c registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2026-02-23 21:00:20.236185 | orchestrator | 7c6546e69235 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-02-23 21:00:20.236192 | orchestrator | 9d3db2326f82 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2026-02-23 21:00:20.236198 | orchestrator | b0b86842d216 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-02-23 21:00:20.236205 | orchestrator | 15e5b9fef6f1 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-02-23 21:00:20.236213 | orchestrator | 7a1fbf8722e5 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 
2026-02-23 21:00:20.236217 | orchestrator | 5bb36b27701d registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2026-02-23 21:00:20.236220 | orchestrator | 6bff7827bfc4 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2026-02-23 21:00:20.236224 | orchestrator | c8f850e50861 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2026-02-23 21:00:20.236231 | orchestrator | 1425fd227386 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-02-23 21:00:20.236235 | orchestrator | d4278161478e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0 2026-02-23 21:00:20.236239 | orchestrator | 95c863a571bb registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2026-02-23 21:00:20.236243 | orchestrator | 99791c169561 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-02-23 21:00:20.236246 | orchestrator | cebf9cd6e4f6 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2026-02-23 21:00:20.236250 | orchestrator | 47169dfd7f1a registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2026-02-23 21:00:20.236256 | orchestrator | 680a8bc5ced4 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2026-02-23 21:00:20.236260 | orchestrator | 8fb8eaa558e5 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2026-02-23 21:00:20.236264 | orchestrator | 83d3e3c07897 
registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-02-23 21:00:20.236267 | orchestrator | 10f86042c95a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2026-02-23 21:00:20.236271 | orchestrator | 7cdc24b04015 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-02-23 21:00:20.572529 | orchestrator | 2026-02-23 21:00:20.572584 | orchestrator | ## Images @ testbed-node-0 2026-02-23 21:00:20.572592 | orchestrator | 2026-02-23 21:00:20.572598 | orchestrator | + echo 2026-02-23 21:00:20.572603 | orchestrator | + echo '## Images @ testbed-node-0' 2026-02-23 21:00:20.572609 | orchestrator | + echo 2026-02-23 21:00:20.572614 | orchestrator | + osism container testbed-node-0 images 2026-02-23 21:00:22.938006 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-23 21:00:22.938213 | orchestrator | registry.osism.tech/osism/ceph-daemon reef ff59b29334dd 17 hours ago 1.27GB 2026-02-23 21:00:22.938229 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 3652d3a7de56 19 hours ago 272MB 2026-02-23 21:00:22.938236 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 dbe28c2fd6c3 19 hours ago 418MB 2026-02-23 21:00:22.938243 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f894a5bc6d0b 19 hours ago 673MB 2026-02-23 21:00:22.938250 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 b1fd4c643197 19 hours ago 279MB 2026-02-23 21:00:22.938257 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5b921837d137 19 hours ago 271MB 2026-02-23 21:00:22.938265 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 540f6409fbf5 19 hours ago 328MB 2026-02-23 21:00:22.938273 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 13679e653294 19 hours ago 282MB 2026-02-23 21:00:22.938280 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 37fbf97e7f6f 19 hours ago 1.02GB 
2026-02-23 21:00:22.938287 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 743bb34b6cee 19 hours ago 584MB 2026-02-23 21:00:22.938314 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 c159d79f20f0 19 hours ago 1.53GB 2026-02-23 21:00:22.938321 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 80d69bb5f098 19 hours ago 1.56GB 2026-02-23 21:00:22.938327 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 4d41d184aa52 19 hours ago 284MB 2026-02-23 21:00:22.938334 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 7e08fe973c8e 19 hours ago 284MB 2026-02-23 21:00:22.938340 | orchestrator | registry.osism.tech/kolla/redis 2024.2 0fa4d8b23966 19 hours ago 278MB 2026-02-23 21:00:22.938347 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 2052a297db29 19 hours ago 278MB 2026-02-23 21:00:22.938369 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 4d92e9384e36 19 hours ago 1.15GB 2026-02-23 21:00:22.938376 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 5dbd7b932e63 19 hours ago 457MB 2026-02-23 21:00:22.938382 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 278eaa73d02c 19 hours ago 311MB 2026-02-23 21:00:22.938388 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 13638ae36e57 19 hours ago 297MB 2026-02-23 21:00:22.938395 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 8ff5a6b3cd16 19 hours ago 304MB 2026-02-23 21:00:22.938401 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 d9cb73d898bb 19 hours ago 306MB 2026-02-23 21:00:22.938407 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 4e4cb8e72dfc 19 hours ago 363MB 2026-02-23 21:00:22.938414 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 51d6ef378bbd 19 hours ago 846MB 2026-02-23 21:00:22.938420 | orchestrator | 
registry.osism.tech/kolla/ovn-sb-db-server 2024.2 25bb84f2289b 19 hours ago 846MB 2026-02-23 21:00:22.938427 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 47dae0f63bd5 19 hours ago 846MB 2026-02-23 21:00:22.938434 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 342e8da8ac3a 19 hours ago 846MB 2026-02-23 21:00:22.938441 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 189daad12150 19 hours ago 1.17GB 2026-02-23 21:00:22.938448 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 624955c2c115 19 hours ago 1.06GB 2026-02-23 21:00:22.938455 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 9b80c75ebf88 19 hours ago 1.03GB 2026-02-23 21:00:22.938462 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 3f4a0924d9ed 19 hours ago 1.06GB 2026-02-23 21:00:22.938469 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 741914d697fa 19 hours ago 1.03GB 2026-02-23 21:00:22.938476 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 45d576cfc14a 19 hours ago 1.03GB 2026-02-23 21:00:22.938482 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 1f7f7078acd6 19 hours ago 995MB 2026-02-23 21:00:22.938489 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 d6dc835952f4 19 hours ago 1.05GB 2026-02-23 21:00:22.938495 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 db0ba08b76e7 19 hours ago 1.42GB 2026-02-23 21:00:22.938525 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 aad4ba476f1c 19 hours ago 1.72GB 2026-02-23 21:00:22.938533 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3102035cbd3f 19 hours ago 1.41GB 2026-02-23 21:00:22.938541 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 ffa30f3933ed 19 hours ago 1.41GB 2026-02-23 21:00:22.938548 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 e4b48b9cb79e 19 hours ago 981MB 2026-02-23 21:00:22.938564 | 
orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 e3b7812cd6ff 19 hours ago 990MB 2026-02-23 21:00:22.938572 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 8ab2ea11494a 19 hours ago 989MB 2026-02-23 21:00:22.938579 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 13bd17ae9e62 19 hours ago 994MB 2026-02-23 21:00:22.938586 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 2e44006b8d8c 19 hours ago 990MB 2026-02-23 21:00:22.938595 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 7b60bbae1ff0 19 hours ago 994MB 2026-02-23 21:00:22.938608 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 a28e142a95bf 19 hours ago 990MB 2026-02-23 21:00:22.938615 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0eec7e0f7b33 19 hours ago 996MB 2026-02-23 21:00:22.938622 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 8c429b2daef4 19 hours ago 996MB 2026-02-23 21:00:22.938629 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9d4b161f78ee 19 hours ago 996MB 2026-02-23 21:00:22.938635 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 3770c4e50a62 19 hours ago 1.04GB 2026-02-23 21:00:22.938641 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a75dcc713ad6 19 hours ago 1.05GB 2026-02-23 21:00:22.938648 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f3afb1708fa9 19 hours ago 1.07GB 2026-02-23 21:00:22.938655 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 7e0dfe6dfac8 19 hours ago 1.22GB 2026-02-23 21:00:22.938662 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b141835a8ce3 19 hours ago 1.22GB 2026-02-23 21:00:22.938669 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 0e230dfe273c 19 hours ago 1.37GB 2026-02-23 21:00:22.938676 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 1d5f182c3a75 19 hours ago 1.22GB 2026-02-23 
21:00:22.938684 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 1d3d06d0cc72 19 hours ago 979MB 2026-02-23 21:00:22.938693 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 d7ef40f551b0 19 hours ago 979MB 2026-02-23 21:00:22.938702 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 826ad4a5837c 19 hours ago 979MB 2026-02-23 21:00:22.938710 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 9fa396903596 19 hours ago 979MB 2026-02-23 21:00:22.938720 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 2d0736f4b996 19 hours ago 981MB 2026-02-23 21:00:22.938729 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 112e904da6ac 19 hours ago 982MB 2026-02-23 21:00:22.938738 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 5f8755d26e30 19 hours ago 1.13GB 2026-02-23 21:00:22.938747 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 ad0e7877ae1a 19 hours ago 1.25GB 2026-02-23 21:00:22.938756 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 4e1be079f392 19 hours ago 1.1GB 2026-02-23 21:00:23.242169 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-23 21:00:23.242418 | orchestrator | ++ semver latest 5.0.0 2026-02-23 21:00:23.290418 | orchestrator | 2026-02-23 21:00:23.290474 | orchestrator | ## Containers @ testbed-node-1 2026-02-23 21:00:23.290480 | orchestrator | 2026-02-23 21:00:23.290485 | orchestrator | + [[ -1 -eq -1 ]] 2026-02-23 21:00:23.290489 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-23 21:00:23.290494 | orchestrator | + echo 2026-02-23 21:00:23.290498 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-23 21:00:23.290503 | orchestrator | + echo 2026-02-23 21:00:23.290507 | orchestrator | + osism container testbed-node-1 ps 2026-02-23 21:00:25.725570 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-23 21:00:25.725685 | orchestrator | 
d1408acd9d1f registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-02-23 21:00:25.725695 | orchestrator | b5eb104843bf registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-02-23 21:00:25.725706 | orchestrator | d7245a8dd887 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-02-23 21:00:25.725715 | orchestrator | 7476be8dbee9 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-02-23 21:00:25.725719 | orchestrator | 6d256e88f73c registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-02-23 21:00:25.725739 | orchestrator | 8a97c10f4c2d registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-02-23 21:00:25.725744 | orchestrator | a30c7b0ec30e registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-02-23 21:00:25.725748 | orchestrator | 5e4301d2282f registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2026-02-23 21:00:25.725756 | orchestrator | 73b7a5445e47 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2026-02-23 21:00:25.725760 | orchestrator | ec1f8795a46d registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-02-23 21:00:25.725764 | orchestrator | 0716af3595da registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup 2026-02-23 21:00:25.725767 | orchestrator | 31be6ebb6de2 
registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume 2026-02-23 21:00:25.725771 | orchestrator | 938d2e52da08 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2026-02-23 21:00:25.725776 | orchestrator | 167c27ea3c2c registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_api 2026-02-23 21:00:25.725780 | orchestrator | 8d3c5f8a6918 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2026-02-23 21:00:25.725783 | orchestrator | 0df7a178b913 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2026-02-23 21:00:25.725788 | orchestrator | 45a43da9bb5d registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2026-02-23 21:00:25.725792 | orchestrator | a6d23d5892eb registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2026-02-23 21:00:25.725796 | orchestrator | eea7bcd104cd registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2026-02-23 21:00:25.725815 | orchestrator | ec587fb56e43 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2026-02-23 21:00:25.725819 | orchestrator | 701739d73676 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor 2026-02-23 21:00:25.725837 | orchestrator | e07c0c705c87 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 
2026-02-23 21:00:25.725844 | orchestrator | 641fd89a8bac registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server 2026-02-23 21:00:25.725850 | orchestrator | 0931f24e45c4 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2026-02-23 21:00:25.725856 | orchestrator | 1d3e6433dbcd registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2026-02-23 21:00:25.725863 | orchestrator | 890d3270e2a8 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns 2026-02-23 21:00:25.725869 | orchestrator | f4195d5bcd30 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2026-02-23 21:00:25.725879 | orchestrator | 7a8503815492 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central 2026-02-23 21:00:25.725886 | orchestrator | ceaf200aa6cf registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_api 2026-02-23 21:00:25.725893 | orchestrator | 10886072f0cd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 14 minutes ago Up 14 minutes ceph-mgr-testbed-node-1 2026-02-23 21:00:25.725899 | orchestrator | c7417042d111 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) barbican_worker 2026-02-23 21:00:25.725905 | orchestrator | 7c597c6f77e1 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2026-02-23 21:00:25.725912 | orchestrator | 5c6cee6aec20 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes 
ago Up 15 minutes (healthy) barbican_keystone_listener 2026-02-23 21:00:25.725919 | orchestrator | a744a7ec6fd4 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2026-02-23 21:00:25.725925 | orchestrator | ea0bb13b38ff registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-02-23 21:00:25.725932 | orchestrator | c2424b748211 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-02-23 21:00:25.725939 | orchestrator | fab5d963ab52 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-02-23 21:00:25.725946 | orchestrator | 23579a2a2bce registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-02-23 21:00:25.725950 | orchestrator | 92188b0b4545 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-02-23 21:00:25.725961 | orchestrator | d8c52220eda7 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-02-23 21:00:25.725965 | orchestrator | d6e85db8e5bb registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2026-02-23 21:00:25.725968 | orchestrator | a612c4825865 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1 2026-02-23 21:00:25.725972 | orchestrator | f076be6da6a7 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-02-23 21:00:25.725976 | orchestrator | 50590380e73d registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-02-23 
21:00:25.725984 | orchestrator | 4929a2f62ab7 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2026-02-23 21:00:25.725988 | orchestrator | ee1fa7b15153 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2026-02-23 21:00:25.725992 | orchestrator | 3f4624ed3c23 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2026-02-23 21:00:25.725996 | orchestrator | 545458a73d5f registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2026-02-23 21:00:25.725999 | orchestrator | 537cf48a2d1c registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-02-23 21:00:25.726003 | orchestrator | 43374694273c registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2026-02-23 21:00:25.726007 | orchestrator | 38346c98a66b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1 2026-02-23 21:00:25.726011 | orchestrator | 92743330b010 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-02-23 21:00:25.726058 | orchestrator | 72a7d39ea1c0 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2026-02-23 21:00:25.726062 | orchestrator | 9fb481e3e0f0 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2026-02-23 21:00:25.726066 | orchestrator | 8197c51281a5 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2026-02-23 21:00:25.726070 | orchestrator | 0eb4c7c760a6 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2026-02-23 21:00:25.726074 | orchestrator | 9a4fe6fbb8cb registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-02-23 21:00:25.726077 | orchestrator | c28ef4a3f3fc registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes kolla_toolbox 2026-02-23 21:00:25.726085 | orchestrator | 5fa16367a994 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-02-23 21:00:26.036050 | orchestrator | 2026-02-23 21:00:26.036183 | orchestrator | ## Images @ testbed-node-1 2026-02-23 21:00:26.036192 | orchestrator | 2026-02-23 21:00:26.036197 | orchestrator | + echo 2026-02-23 21:00:26.036202 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-23 21:00:26.036207 | orchestrator | + echo 2026-02-23 21:00:26.036212 | orchestrator | + osism container testbed-node-1 images 2026-02-23 21:00:28.484442 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-23 21:00:28.484544 | orchestrator | registry.osism.tech/osism/ceph-daemon reef ff59b29334dd 17 hours ago 1.27GB 2026-02-23 21:00:28.484560 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 3652d3a7de56 19 hours ago 272MB 2026-02-23 21:00:28.484571 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 dbe28c2fd6c3 19 hours ago 418MB 2026-02-23 21:00:28.484582 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f894a5bc6d0b 19 hours ago 673MB 2026-02-23 21:00:28.484590 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 b1fd4c643197 19 hours ago 279MB 2026-02-23 21:00:28.484597 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5b921837d137 19 hours ago 271MB 2026-02-23 21:00:28.484603 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 540f6409fbf5 19 hours ago 328MB 2026-02-23 21:00:28.484610 | orchestrator | 
registry.osism.tech/kolla/keepalived 2024.2 13679e653294 19 hours ago 282MB 2026-02-23 21:00:28.484616 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 37fbf97e7f6f 19 hours ago 1.02GB 2026-02-23 21:00:28.484622 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 743bb34b6cee 19 hours ago 584MB 2026-02-23 21:00:28.484628 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 c159d79f20f0 19 hours ago 1.53GB 2026-02-23 21:00:28.484634 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 80d69bb5f098 19 hours ago 1.56GB 2026-02-23 21:00:28.484640 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 4d41d184aa52 19 hours ago 284MB 2026-02-23 21:00:28.484647 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 7e08fe973c8e 19 hours ago 284MB 2026-02-23 21:00:28.484653 | orchestrator | registry.osism.tech/kolla/redis 2024.2 0fa4d8b23966 19 hours ago 278MB 2026-02-23 21:00:28.484659 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 2052a297db29 19 hours ago 278MB 2026-02-23 21:00:28.484666 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 4d92e9384e36 19 hours ago 1.15GB 2026-02-23 21:00:28.484672 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 5dbd7b932e63 19 hours ago 457MB 2026-02-23 21:00:28.484678 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 278eaa73d02c 19 hours ago 311MB 2026-02-23 21:00:28.484684 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 13638ae36e57 19 hours ago 297MB 2026-02-23 21:00:28.484690 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 8ff5a6b3cd16 19 hours ago 304MB 2026-02-23 21:00:28.484696 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 d9cb73d898bb 19 hours ago 306MB 2026-02-23 21:00:28.484702 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 4e4cb8e72dfc 19 hours ago 363MB 2026-02-23 
21:00:28.484709 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 51d6ef378bbd 19 hours ago 846MB 2026-02-23 21:00:28.484913 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 25bb84f2289b 19 hours ago 846MB 2026-02-23 21:00:28.485002 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 342e8da8ac3a 19 hours ago 846MB 2026-02-23 21:00:28.485017 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 47dae0f63bd5 19 hours ago 846MB 2026-02-23 21:00:28.485027 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 189daad12150 19 hours ago 1.17GB 2026-02-23 21:00:28.485037 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 624955c2c115 19 hours ago 1.06GB 2026-02-23 21:00:28.485047 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 9b80c75ebf88 19 hours ago 1.03GB 2026-02-23 21:00:28.485058 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 3f4a0924d9ed 19 hours ago 1.06GB 2026-02-23 21:00:28.485067 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 741914d697fa 19 hours ago 1.03GB 2026-02-23 21:00:28.485077 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 45d576cfc14a 19 hours ago 1.03GB 2026-02-23 21:00:28.485153 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 db0ba08b76e7 19 hours ago 1.42GB 2026-02-23 21:00:28.485170 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 aad4ba476f1c 19 hours ago 1.72GB 2026-02-23 21:00:28.485182 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3102035cbd3f 19 hours ago 1.41GB 2026-02-23 21:00:28.485192 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 ffa30f3933ed 19 hours ago 1.41GB 2026-02-23 21:00:28.485202 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 e4b48b9cb79e 19 hours ago 981MB 2026-02-23 21:00:28.485214 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 e3b7812cd6ff 19 hours ago 990MB 
2026-02-23 21:00:28.485226 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 8ab2ea11494a 19 hours ago 989MB
2026-02-23 21:00:28.485237 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 13bd17ae9e62 19 hours ago 994MB
2026-02-23 21:00:28.485247 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 2e44006b8d8c 19 hours ago 990MB
2026-02-23 21:00:28.485259 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 7b60bbae1ff0 19 hours ago 994MB
2026-02-23 21:00:28.485271 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 a28e142a95bf 19 hours ago 990MB
2026-02-23 21:00:28.485282 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0eec7e0f7b33 19 hours ago 996MB
2026-02-23 21:00:28.485293 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 8c429b2daef4 19 hours ago 996MB
2026-02-23 21:00:28.485304 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9d4b161f78ee 19 hours ago 996MB
2026-02-23 21:00:28.485316 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 3770c4e50a62 19 hours ago 1.04GB
2026-02-23 21:00:28.485327 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a75dcc713ad6 19 hours ago 1.05GB
2026-02-23 21:00:28.485338 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f3afb1708fa9 19 hours ago 1.07GB
2026-02-23 21:00:28.485349 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 7e0dfe6dfac8 19 hours ago 1.22GB
2026-02-23 21:00:28.485361 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b141835a8ce3 19 hours ago 1.22GB
2026-02-23 21:00:28.485374 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 0e230dfe273c 19 hours ago 1.37GB
2026-02-23 21:00:28.485384 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 1d5f182c3a75 19 hours ago 1.22GB
2026-02-23 21:00:28.485392 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 5f8755d26e30 19 hours ago 1.13GB
2026-02-23 21:00:28.485409 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 ad0e7877ae1a 19 hours ago 1.25GB
2026-02-23 21:00:28.485416 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 4e1be079f392 19 hours ago 1.1GB
2026-02-23 21:00:28.806588 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-23 21:00:28.806638 | orchestrator | ++ semver latest 5.0.0
2026-02-23 21:00:28.859143 | orchestrator |
2026-02-23 21:00:28.859194 | orchestrator | ## Containers @ testbed-node-2
2026-02-23 21:00:28.859201 | orchestrator |
2026-02-23 21:00:28.859206 | orchestrator | + [[ -1 -eq -1 ]]
2026-02-23 21:00:28.859212 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-02-23 21:00:28.859217 | orchestrator | + echo
2026-02-23 21:00:28.859222 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-02-23 21:00:28.859227 | orchestrator | + echo
2026-02-23 21:00:28.859232 | orchestrator | + osism container testbed-node-2 ps
2026-02-23 21:00:31.237188 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-23 21:00:31.237262 | orchestrator | 06b4501fddf0 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-02-23 21:00:31.237272 | orchestrator | 16e2fe9b6d86 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-02-23 21:00:31.237278 | orchestrator | 74add7175258 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-02-23 21:00:31.237284 | orchestrator | 990f002c9cab registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-02-23 21:00:31.237290 | orchestrator | 928d60ebeb4d registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_api
2026-02-23 21:00:31.237295 | orchestrator | 96584170477d registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2026-02-23 21:00:31.237301 | orchestrator | 6a4b83421356 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2026-02-23 21:00:31.237307 | orchestrator | 32eb5b69b8e2 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2026-02-23 21:00:31.237313 | orchestrator | 8655b1d774dd registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-02-23 21:00:31.237318 | orchestrator | fa1d49a1da28 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes grafana
2026-02-23 21:00:31.237324 | orchestrator | 01a8062a0d6d registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup
2026-02-23 21:00:31.237330 | orchestrator | 7e84185caddc registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume
2026-02-23 21:00:31.237335 | orchestrator | d963b41365f6 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler
2026-02-23 21:00:31.237341 | orchestrator | 1da202345403 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_api
2026-02-23 21:00:31.237362 | orchestrator | be111229c568 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api
2026-02-23 21:00:31.237385 | orchestrator | bf40e44a4c3d registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter
2026-02-23 21:00:31.237392 | orchestrator | 63d6fb9e537e registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor
2026-02-23 21:00:31.237398 | orchestrator | 62d62227baa9 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter
2026-02-23 21:00:31.237404 | orchestrator | 9fa5b22b29bb registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter
2026-02-23 21:00:31.237410 | orchestrator | 1dbced3c9d26 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter
2026-02-23 21:00:31.237416 | orchestrator | 51d6cd63b3cf registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2026-02-23 21:00:31.237435 | orchestrator | ecd833167992 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2026-02-23 21:00:31.237441 | orchestrator | d46d277adc3f registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server
2026-02-23 21:00:31.237447 | orchestrator | 952227abcd17 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2026-02-23 21:00:31.237455 | orchestrator | b3606ae009f2 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2026-02-23 21:00:31.237461 | orchestrator | a07e62a335d0 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns
2026-02-23 21:00:31.237467 | orchestrator | 56e26c8dbf39 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer
2026-02-23 21:00:31.237473 | orchestrator | 65c9529434d3 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central
2026-02-23 21:00:31.237479 | orchestrator | a6d0bdb2f6a7 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_api
2026-02-23 21:00:31.237485 | orchestrator | 0b79840a253a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2026-02-23 21:00:31.237490 | orchestrator | 62f756b8b615 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker
2026-02-23 21:00:31.237496 | orchestrator | cef2e59bc3b8 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2026-02-23 21:00:31.237503 | orchestrator | 7e4e58e70baf registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener
2026-02-23 21:00:31.237508 | orchestrator | 50e68354bc6c registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api
2026-02-23 21:00:31.237521 | orchestrator | eb8e82251df1 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2026-02-23 21:00:31.237528 | orchestrator | d59d3da73cf5 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-02-23 21:00:31.237534 | orchestrator | fe9dbd3f0ee5 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2026-02-23 21:00:31.237541 | orchestrator | f54a3cdef333 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2026-02-23 21:00:31.237547 | orchestrator | 8da33485d4b4 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-02-23 21:00:31.237551 | orchestrator | b11738b2c532 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2026-02-23 21:00:31.237555 | orchestrator | fd25d172ab7a registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2026-02-23 21:00:31.237559 | orchestrator | 30f38c07b802 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2
2026-02-23 21:00:31.237563 | orchestrator | e62bbb2dcdfd registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2026-02-23 21:00:31.237566 | orchestrator | c8d721e96fb2 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2026-02-23 21:00:31.237574 | orchestrator | 9e8a0a650067 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2026-02-23 21:00:31.237579 | orchestrator | e3b3cefe63d5 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd
2026-02-23 21:00:31.237583 | orchestrator | 6240db05fe39 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db
2026-02-23 21:00:31.237586 | orchestrator | b232bf2f05dc registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db
2026-02-23 21:00:31.237590 | orchestrator | 7f46d30f008c registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2026-02-23 21:00:31.237598 | orchestrator | 65a0108552e1 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2026-02-23 21:00:31.237602 | orchestrator | fce2d54ca435 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2
2026-02-23 21:00:31.237606 | orchestrator | 8cf4996cd8eb registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2026-02-23 21:00:31.237609 | orchestrator | def9cafe0d6f registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2026-02-23 21:00:31.237613 | orchestrator | b630e25559d7 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2026-02-23 21:00:31.237620 | orchestrator | 6865c332f2ef registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2026-02-23 21:00:31.237624 | orchestrator | 0ce7f73609b5 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2026-02-23 21:00:31.237628 | orchestrator | 5383edd35d47 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-02-23 21:00:31.237631 | orchestrator | 20a8a06d4d22 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2026-02-23 21:00:31.237635 | orchestrator | f2cb0a59ec83 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-02-23 21:00:31.433562 | orchestrator |
2026-02-23 21:00:31.433652 | orchestrator | ## Images @ testbed-node-2
2026-02-23 21:00:31.433663 | orchestrator |
2026-02-23 21:00:31.433669 | orchestrator | + echo
2026-02-23 21:00:31.433675 | orchestrator | + echo '## Images @ testbed-node-2'
2026-02-23 21:00:31.433682 | orchestrator | + echo
2026-02-23 21:00:31.433689 | orchestrator | + osism container testbed-node-2 images
2026-02-23 21:00:33.575493 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-23 21:00:33.575568 | orchestrator | registry.osism.tech/osism/ceph-daemon reef ff59b29334dd 17 hours ago 1.27GB
2026-02-23 21:00:33.575574 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 3652d3a7de56 19 hours ago 272MB
2026-02-23 21:00:33.575592 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 dbe28c2fd6c3 19 hours ago 418MB
2026-02-23 21:00:33.575596 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f894a5bc6d0b 19 hours ago 673MB
2026-02-23 21:00:33.575600 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 b1fd4c643197 19 hours ago 279MB
2026-02-23 21:00:33.575604 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5b921837d137 19 hours ago 271MB
2026-02-23 21:00:33.575608 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 540f6409fbf5 19 hours ago 328MB
2026-02-23 21:00:33.575612 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 13679e653294 19 hours ago 282MB
2026-02-23 21:00:33.575616 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 37fbf97e7f6f 19 hours ago 1.02GB
2026-02-23 21:00:33.575620 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 743bb34b6cee 19 hours ago 584MB
2026-02-23 21:00:33.575624 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 c159d79f20f0 19 hours ago 1.53GB
2026-02-23 21:00:33.575628 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 80d69bb5f098 19 hours ago 1.56GB
2026-02-23 21:00:33.575631 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 7e08fe973c8e 19 hours ago 284MB
2026-02-23 21:00:33.575635 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 4d41d184aa52 19 hours ago 284MB
2026-02-23 21:00:33.575639 | orchestrator | registry.osism.tech/kolla/redis 2024.2 0fa4d8b23966 19 hours ago 278MB
2026-02-23 21:00:33.575643 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 2052a297db29 19 hours ago 278MB
2026-02-23 21:00:33.575646 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 4d92e9384e36 19 hours ago 1.15GB
2026-02-23 21:00:33.575652 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 5dbd7b932e63 19 hours ago 457MB
2026-02-23 21:00:33.575658 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 278eaa73d02c 19 hours ago 311MB
2026-02-23 21:00:33.575681 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 13638ae36e57 19 hours ago 297MB
2026-02-23 21:00:33.575687 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 8ff5a6b3cd16 19 hours ago 304MB
2026-02-23 21:00:33.575693 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 d9cb73d898bb 19 hours ago 306MB
2026-02-23 21:00:33.575702 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 4e4cb8e72dfc 19 hours ago 363MB
2026-02-23 21:00:33.575709 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 51d6ef378bbd 19 hours ago 846MB
2026-02-23 21:00:33.575715 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 25bb84f2289b 19 hours ago 846MB
2026-02-23 21:00:33.575721 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 342e8da8ac3a 19 hours ago 846MB
2026-02-23 21:00:33.575729 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 47dae0f63bd5 19 hours ago 846MB
2026-02-23 21:00:33.575735 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 189daad12150 19 hours ago 1.17GB
2026-02-23 21:00:33.575741 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 624955c2c115 19 hours ago 1.06GB
2026-02-23 21:00:33.575747 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 9b80c75ebf88 19 hours ago 1.03GB
2026-02-23 21:00:33.575754 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 3f4a0924d9ed 19 hours ago 1.06GB
2026-02-23 21:00:33.575760 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 741914d697fa 19 hours ago 1.03GB
2026-02-23 21:00:33.575766 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 45d576cfc14a 19 hours ago 1.03GB
2026-02-23 21:00:33.575772 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 db0ba08b76e7 19 hours ago 1.42GB
2026-02-23 21:00:33.575778 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 aad4ba476f1c 19 hours ago 1.72GB
2026-02-23 21:00:33.575785 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3102035cbd3f 19 hours ago 1.41GB
2026-02-23 21:00:33.575803 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 ffa30f3933ed 19 hours ago 1.41GB
2026-02-23 21:00:33.575810 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 e4b48b9cb79e 19 hours ago 981MB
2026-02-23 21:00:33.575816 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 e3b7812cd6ff 19 hours ago 990MB
2026-02-23 21:00:33.575822 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 8ab2ea11494a 19 hours ago 989MB
2026-02-23 21:00:33.575828 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 13bd17ae9e62 19 hours ago 994MB
2026-02-23 21:00:33.575835 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 2e44006b8d8c 19 hours ago 990MB
2026-02-23 21:00:33.575840 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 7b60bbae1ff0 19 hours ago 994MB
2026-02-23 21:00:33.575846 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 a28e142a95bf 19 hours ago 990MB
2026-02-23 21:00:33.575869 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0eec7e0f7b33 19 hours ago 996MB
2026-02-23 21:00:33.575875 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 8c429b2daef4 19 hours ago 996MB
2026-02-23 21:00:33.575881 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9d4b161f78ee 19 hours ago 996MB
2026-02-23 21:00:33.575887 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 3770c4e50a62 19 hours ago 1.04GB
2026-02-23 21:00:33.575900 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a75dcc713ad6 19 hours ago 1.05GB
2026-02-23 21:00:33.575912 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f3afb1708fa9 19 hours ago 1.07GB
2026-02-23 21:00:33.575916 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 7e0dfe6dfac8 19 hours ago 1.22GB
2026-02-23 21:00:33.575920 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b141835a8ce3 19 hours ago 1.22GB
2026-02-23 21:00:33.575924 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 0e230dfe273c 19 hours ago 1.37GB
2026-02-23 21:00:33.575928 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 1d5f182c3a75 19 hours ago 1.22GB
2026-02-23 21:00:33.575932 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 5f8755d26e30 19 hours ago 1.13GB
2026-02-23 21:00:33.575935 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 ad0e7877ae1a 19 hours ago 1.25GB
2026-02-23 21:00:33.575939 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 4e1be079f392 19 hours ago 1.1GB
2026-02-23 21:00:33.781062 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-02-23 21:00:33.787656 | orchestrator | + set -e
2026-02-23 21:00:33.787776 | orchestrator | + source /opt/manager-vars.sh
2026-02-23 21:00:33.788422 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-23 21:00:33.788471 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-23 21:00:33.788500 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-23 21:00:33.789250 | orchestrator | ++ CEPH_VERSION=reef
2026-02-23 21:00:33.789286 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-23 21:00:33.789297 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-23 21:00:33.789306 | orchestrator | ++ export MANAGER_VERSION=latest
2026-02-23 21:00:33.789313 | orchestrator | ++ MANAGER_VERSION=latest
2026-02-23 21:00:33.789319 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-23 21:00:33.789326 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-23 21:00:33.789332 | orchestrator | ++ export ARA=false
2026-02-23 21:00:33.789342 | orchestrator | ++ ARA=false
2026-02-23 21:00:33.789348 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-23 21:00:33.789354 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-23 21:00:33.789361 | orchestrator | ++ export TEMPEST=false
2026-02-23 21:00:33.789367 | orchestrator | ++ TEMPEST=false
2026-02-23 21:00:33.789374 | orchestrator | ++ export IS_ZUUL=true
2026-02-23 21:00:33.789380 | orchestrator | ++ IS_ZUUL=true
2026-02-23 21:00:33.789386 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96
2026-02-23 21:00:33.789392 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96
2026-02-23 21:00:33.789399 | orchestrator | ++ export EXTERNAL_API=false
2026-02-23 21:00:33.789406 | orchestrator | ++ EXTERNAL_API=false
2026-02-23 21:00:33.789412 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-23 21:00:33.789418 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-23 21:00:33.789424 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-23 21:00:33.789431 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-23 21:00:33.789437 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-23 21:00:33.789443 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-23 21:00:33.789450 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-23 21:00:33.789456 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-02-23 21:00:33.794892 | orchestrator | + set -e
2026-02-23 21:00:33.795007 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-23 21:00:33.795017 | orchestrator | ++ export INTERACTIVE=false
2026-02-23 21:00:33.795025 | orchestrator | ++ INTERACTIVE=false
2026-02-23 21:00:33.795033 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-23 21:00:33.795041 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-23 21:00:33.795047 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-23 21:00:33.795591 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-23 21:00:33.801433 | orchestrator |
2026-02-23 21:00:33.801545 | orchestrator | # Ceph status
2026-02-23 21:00:33.801556 | orchestrator |
2026-02-23 21:00:33.801564 | orchestrator | ++ export MANAGER_VERSION=latest
2026-02-23 21:00:33.801572 | orchestrator | ++ MANAGER_VERSION=latest
2026-02-23 21:00:33.801579 | orchestrator | + echo
2026-02-23 21:00:33.801585 | orchestrator | + echo '# Ceph status'
2026-02-23 21:00:33.801592 | orchestrator | + echo
2026-02-23 21:00:33.801598 | orchestrator | + ceph -s
2026-02-23 21:00:34.322533 | orchestrator | cluster:
2026-02-23 21:00:34.322634 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-02-23 21:00:34.322643 | orchestrator | health: HEALTH_OK
2026-02-23 21:00:34.322650 | orchestrator |
2026-02-23 21:00:34.322656 | orchestrator | services:
2026-02-23 21:00:34.322663 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m)
2026-02-23 21:00:34.322671 | orchestrator | mgr: testbed-node-0(active, since 14m), standbys: testbed-node-1, testbed-node-2
2026-02-23 21:00:34.322679 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-02-23 21:00:34.322686 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m)
2026-02-23 21:00:34.322692 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-02-23 21:00:34.322698 | orchestrator |
2026-02-23 21:00:34.322704 | orchestrator | data:
2026-02-23 21:00:34.322711 | orchestrator | volumes: 1/1 healthy
2026-02-23 21:00:34.322717 | orchestrator | pools: 14 pools, 401 pgs
2026-02-23 21:00:34.322723 | orchestrator | objects: 524 objects, 2.2 GiB
2026-02-23 21:00:34.322730 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-02-23 21:00:34.322736 | orchestrator | pgs: 401 active+clean
2026-02-23 21:00:34.322743 | orchestrator |
2026-02-23 21:00:34.355766 | orchestrator |
2026-02-23 21:00:34.355845 | orchestrator | # Ceph versions
2026-02-23 21:00:34.355854 | orchestrator |
2026-02-23 21:00:34.355861 | orchestrator | + echo
2026-02-23 21:00:34.355868 | orchestrator | + echo '# Ceph versions'
2026-02-23 21:00:34.355876 | orchestrator | + echo
2026-02-23 21:00:34.355882 | orchestrator | + ceph versions
2026-02-23 21:00:34.888186 | orchestrator | {
2026-02-23 21:00:34.888260 | orchestrator | "mon": {
2026-02-23 21:00:34.888266 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-23 21:00:34.888271 | orchestrator | },
2026-02-23 21:00:34.888275 | orchestrator | "mgr": {
2026-02-23 21:00:34.888290 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-23 21:00:34.888294 | orchestrator | },
2026-02-23 21:00:34.888298 | orchestrator | "osd": {
2026-02-23 21:00:34.888302 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-02-23 21:00:34.888305 | orchestrator | },
2026-02-23 21:00:34.888309 | orchestrator | "mds": {
2026-02-23 21:00:34.888313 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-23 21:00:34.888322 | orchestrator | },
2026-02-23 21:00:34.888330 | orchestrator | "rgw": {
2026-02-23 21:00:34.888334 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-23 21:00:34.888338 | orchestrator | },
2026-02-23 21:00:34.888342 | orchestrator | "overall": {
2026-02-23 21:00:34.888345 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-02-23 21:00:34.888349 | orchestrator | }
2026-02-23 21:00:34.888353 | orchestrator | }
2026-02-23 21:00:34.919722 | orchestrator |
2026-02-23 21:00:34.919796 | orchestrator | + echo
2026-02-23 21:00:34.919804 | orchestrator | + echo '# Ceph OSD tree'
2026-02-23 21:00:34.920270 | orchestrator | # Ceph OSD tree
2026-02-23 21:00:34.920280 | orchestrator |
2026-02-23 21:00:34.920285 | orchestrator | + echo
2026-02-23 21:00:34.920289 | orchestrator | + ceph osd df tree
2026-02-23 21:00:35.403844 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-02-23 21:00:35.403933 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2026-02-23 21:00:35.403941 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2026-02-23 21:00:35.403949 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.16 1.21 201 up osd.0
2026-02-23 21:00:35.403956 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 956 MiB 883 MiB 1 KiB 74 MiB 19 GiB 4.67 0.79 189 up osd.5
2026-02-23 21:00:35.403962 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2026-02-23 21:00:35.403969 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 74 MiB 18 GiB 7.73 1.31 203 up osd.2
2026-02-23 21:00:35.403975 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 840 MiB 771 MiB 1 KiB 70 MiB 19 GiB 4.11 0.69 189 up osd.4
2026-02-23 21:00:35.404006 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2026-02-23 21:00:35.404012 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.67 1.13 184 up osd.1
2026-02-23 21:00:35.404019 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 987 MiB 1 KiB 70 MiB 19 GiB 5.17 0.87 204 up osd.3
2026-02-23 21:00:35.404026 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2026-02-23 21:00:35.404032 | orchestrator | MIN/MAX VAR: 0.69/1.31 STDDEV: 1.34
2026-02-23 21:00:35.439493 | orchestrator |
2026-02-23 21:00:35.439567 | orchestrator | # Ceph monitor status
2026-02-23 21:00:35.439576 | orchestrator |
2026-02-23 21:00:35.439584 | orchestrator | + echo
2026-02-23 21:00:35.439591 | orchestrator | + echo '# Ceph monitor status'
2026-02-23 21:00:35.439598 | orchestrator | + echo
2026-02-23 21:00:35.439605 | orchestrator | + ceph mon stat
2026-02-23 21:00:35.969491 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-02-23 21:00:36.002814 | orchestrator |
2026-02-23 21:00:36.002868 | orchestrator | # Ceph quorum status
2026-02-23 21:00:36.002876 | orchestrator |
2026-02-23 21:00:36.002883 | orchestrator | + echo
2026-02-23 21:00:36.002890 | orchestrator | + echo '# Ceph quorum status'
2026-02-23 21:00:36.002896 | orchestrator | + echo
2026-02-23 21:00:36.003588 | orchestrator | + ceph quorum_status
2026-02-23 21:00:36.003619 | orchestrator | + jq
2026-02-23 21:00:36.567193 | orchestrator | {
2026-02-23 21:00:36.567308 | orchestrator | "election_epoch": 8,
2026-02-23 21:00:36.567318 | orchestrator | "quorum": [
2026-02-23 21:00:36.567322 | orchestrator | 0,
2026-02-23 21:00:36.567326 | orchestrator | 1,
2026-02-23 21:00:36.567329 | orchestrator | 2
2026-02-23 21:00:36.567333 | orchestrator | ],
2026-02-23 21:00:36.567337 | orchestrator | "quorum_names": [
2026-02-23 21:00:36.567341 | orchestrator | "testbed-node-0",
2026-02-23 21:00:36.567345 | orchestrator | "testbed-node-1",
2026-02-23 21:00:36.567349 | orchestrator | "testbed-node-2"
2026-02-23 21:00:36.567353 | orchestrator | ],
2026-02-23 21:00:36.567357 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-02-23 21:00:36.567362 | orchestrator | "quorum_age": 1623,
2026-02-23 21:00:36.567365 | orchestrator | "features": {
2026-02-23 21:00:36.567369 | orchestrator | "quorum_con": "4540138322906710015",
2026-02-23 21:00:36.567373 | orchestrator | "quorum_mon": [
2026-02-23 21:00:36.567377 | orchestrator | "kraken",
2026-02-23 21:00:36.567381 | orchestrator | "luminous",
2026-02-23 21:00:36.567384 | orchestrator | "mimic",
2026-02-23 21:00:36.567388 | orchestrator | "osdmap-prune",
2026-02-23 21:00:36.567392 | orchestrator | "nautilus",
2026-02-23 21:00:36.567396 | orchestrator | "octopus",
2026-02-23 21:00:36.567399 | orchestrator | "pacific",
2026-02-23 21:00:36.567403 | orchestrator | "elector-pinging",
2026-02-23 21:00:36.567407 | orchestrator | "quincy",
2026-02-23 21:00:36.567410 | orchestrator | "reef"
2026-02-23 21:00:36.567414 | orchestrator | ]
2026-02-23 21:00:36.567418 | orchestrator | },
2026-02-23 21:00:36.567422 | orchestrator | "monmap": {
2026-02-23 21:00:36.567425 | orchestrator | "epoch": 1,
2026-02-23 21:00:36.567429 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-02-23 21:00:36.567433 | orchestrator | "modified": "2026-02-23T20:33:16.873709Z",
2026-02-23 21:00:36.567437 | orchestrator | "created": "2026-02-23T20:33:16.873709Z",
2026-02-23 21:00:36.567440 | orchestrator | "min_mon_release": 18,
2026-02-23 21:00:36.567444 | orchestrator | "min_mon_release_name": "reef",
2026-02-23 21:00:36.567448 | orchestrator | "election_strategy": 1,
2026-02-23 21:00:36.567451 | orchestrator | "disallowed_leaders: ": "",
2026-02-23 21:00:36.567455 | orchestrator | "stretch_mode": false,
2026-02-23 21:00:36.567459 | orchestrator | "tiebreaker_mon": "",
2026-02-23 21:00:36.567462 | orchestrator | "removed_ranks: ": "",
2026-02-23 21:00:36.567466 | orchestrator | "features": {
2026-02-23 21:00:36.567470 | orchestrator | "persistent": [
2026-02-23 21:00:36.567474 | orchestrator | "kraken",
2026-02-23 21:00:36.567478 | orchestrator | "luminous",
2026-02-23 21:00:36.567481 | orchestrator | "mimic",
2026-02-23 21:00:36.567485 | orchestrator | "osdmap-prune",
2026-02-23 21:00:36.567505 | orchestrator | "nautilus",
2026-02-23 21:00:36.567509 | orchestrator | "octopus",
2026-02-23 21:00:36.567513 | orchestrator | "pacific",
2026-02-23 21:00:36.567516 | orchestrator | "elector-pinging",
2026-02-23 21:00:36.567520 | orchestrator | "quincy",
2026-02-23 21:00:36.567524 | orchestrator | "reef"
2026-02-23 21:00:36.567527 | orchestrator | ],
2026-02-23 21:00:36.567531 | orchestrator | "optional": []
2026-02-23 21:00:36.567535 | orchestrator | },
2026-02-23 21:00:36.567538 | orchestrator | "mons": [
2026-02-23 21:00:36.567542 | orchestrator | {
2026-02-23 21:00:36.567546 | orchestrator | "rank": 0,
2026-02-23 21:00:36.567550 | orchestrator | "name": "testbed-node-0",
2026-02-23 21:00:36.567553 | orchestrator | "public_addrs": {
2026-02-23 21:00:36.567557 | orchestrator | "addrvec": [
2026-02-23 21:00:36.567561 | orchestrator | {
2026-02-23 21:00:36.567564 | orchestrator | "type": "v2",
2026-02-23 21:00:36.567568 | orchestrator | "addr": "192.168.16.10:3300",
2026-02-23 21:00:36.567572 | orchestrator | "nonce": 0
2026-02-23 21:00:36.567576 | orchestrator | },
2026-02-23 21:00:36.567579 | orchestrator | {
2026-02-23 21:00:36.567583 | orchestrator | "type": "v1",
2026-02-23 21:00:36.567587 | orchestrator | "addr": "192.168.16.10:6789",
2026-02-23 21:00:36.567590 | orchestrator | "nonce": 0
2026-02-23 21:00:36.567594 | orchestrator | }
2026-02-23 21:00:36.567598 | orchestrator | ]
2026-02-23 21:00:36.567601 | orchestrator | },
2026-02-23 21:00:36.567605 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-02-23 21:00:36.567609 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-02-23 21:00:36.567613 | orchestrator | "priority": 0,
2026-02-23 21:00:36.567616 | orchestrator | "weight": 0,
2026-02-23 21:00:36.567620 |
orchestrator | "crush_location": "{}" 2026-02-23 21:00:36.567624 | orchestrator | }, 2026-02-23 21:00:36.567627 | orchestrator | { 2026-02-23 21:00:36.567631 | orchestrator | "rank": 1, 2026-02-23 21:00:36.567635 | orchestrator | "name": "testbed-node-1", 2026-02-23 21:00:36.567638 | orchestrator | "public_addrs": { 2026-02-23 21:00:36.567642 | orchestrator | "addrvec": [ 2026-02-23 21:00:36.567646 | orchestrator | { 2026-02-23 21:00:36.567650 | orchestrator | "type": "v2", 2026-02-23 21:00:36.567653 | orchestrator | "addr": "192.168.16.11:3300", 2026-02-23 21:00:36.567657 | orchestrator | "nonce": 0 2026-02-23 21:00:36.567661 | orchestrator | }, 2026-02-23 21:00:36.567664 | orchestrator | { 2026-02-23 21:00:36.567668 | orchestrator | "type": "v1", 2026-02-23 21:00:36.567672 | orchestrator | "addr": "192.168.16.11:6789", 2026-02-23 21:00:36.567675 | orchestrator | "nonce": 0 2026-02-23 21:00:36.567679 | orchestrator | } 2026-02-23 21:00:36.567683 | orchestrator | ] 2026-02-23 21:00:36.567687 | orchestrator | }, 2026-02-23 21:00:36.567701 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-02-23 21:00:36.567705 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-02-23 21:00:36.567709 | orchestrator | "priority": 0, 2026-02-23 21:00:36.567713 | orchestrator | "weight": 0, 2026-02-23 21:00:36.567716 | orchestrator | "crush_location": "{}" 2026-02-23 21:00:36.567720 | orchestrator | }, 2026-02-23 21:00:36.567724 | orchestrator | { 2026-02-23 21:00:36.567727 | orchestrator | "rank": 2, 2026-02-23 21:00:36.567731 | orchestrator | "name": "testbed-node-2", 2026-02-23 21:00:36.567735 | orchestrator | "public_addrs": { 2026-02-23 21:00:36.567739 | orchestrator | "addrvec": [ 2026-02-23 21:00:36.567742 | orchestrator | { 2026-02-23 21:00:36.567746 | orchestrator | "type": "v2", 2026-02-23 21:00:36.567750 | orchestrator | "addr": "192.168.16.12:3300", 2026-02-23 21:00:36.567753 | orchestrator | "nonce": 0 2026-02-23 21:00:36.567757 | orchestrator | }, 2026-02-23 
21:00:36.567761 | orchestrator | { 2026-02-23 21:00:36.567764 | orchestrator | "type": "v1", 2026-02-23 21:00:36.567769 | orchestrator | "addr": "192.168.16.12:6789", 2026-02-23 21:00:36.567773 | orchestrator | "nonce": 0 2026-02-23 21:00:36.567777 | orchestrator | } 2026-02-23 21:00:36.567781 | orchestrator | ] 2026-02-23 21:00:36.567785 | orchestrator | }, 2026-02-23 21:00:36.567789 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-02-23 21:00:36.567794 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-02-23 21:00:36.567798 | orchestrator | "priority": 0, 2026-02-23 21:00:36.567802 | orchestrator | "weight": 0, 2026-02-23 21:00:36.567807 | orchestrator | "crush_location": "{}" 2026-02-23 21:00:36.567814 | orchestrator | } 2026-02-23 21:00:36.567818 | orchestrator | ] 2026-02-23 21:00:36.567822 | orchestrator | } 2026-02-23 21:00:36.567827 | orchestrator | } 2026-02-23 21:00:36.567899 | orchestrator | 2026-02-23 21:00:36.567904 | orchestrator | # Ceph free space status 2026-02-23 21:00:36.567908 | orchestrator | 2026-02-23 21:00:36.567912 | orchestrator | + echo 2026-02-23 21:00:36.567916 | orchestrator | + echo '# Ceph free space status' 2026-02-23 21:00:36.567920 | orchestrator | + echo 2026-02-23 21:00:36.567923 | orchestrator | + ceph df 2026-02-23 21:00:37.110301 | orchestrator | --- RAW STORAGE --- 2026-02-23 21:00:37.110380 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-02-23 21:00:37.110398 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-02-23 21:00:37.110405 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-02-23 21:00:37.110412 | orchestrator | 2026-02-23 21:00:37.110419 | orchestrator | --- POOLS --- 2026-02-23 21:00:37.110426 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-02-23 21:00:37.110434 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-02-23 21:00:37.110441 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-02-23 21:00:37.110447 | orchestrator | 
cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-02-23 21:00:37.110454 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-02-23 21:00:37.110460 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-02-23 21:00:37.110467 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-02-23 21:00:37.110473 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2026-02-23 21:00:37.110480 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-02-23 21:00:37.110486 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-02-23 21:00:37.110492 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-02-23 21:00:37.110499 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-02-23 21:00:37.110505 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB 2026-02-23 21:00:37.110511 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-02-23 21:00:37.110518 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-02-23 21:00:37.153213 | orchestrator | ++ semver latest 5.0.0 2026-02-23 21:00:37.202947 | orchestrator | + [[ -1 -eq -1 ]] 2026-02-23 21:00:37.203012 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-23 21:00:37.203018 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-02-23 21:00:37.203023 | orchestrator | + osism apply facts 2026-02-23 21:00:49.133542 | orchestrator | 2026-02-23 21:00:49 | INFO  | Prepare task for execution of facts. 2026-02-23 21:00:49.204265 | orchestrator | 2026-02-23 21:00:49 | INFO  | Task cc3b3164-562a-458e-a086-848e272e260a (facts) was prepared for execution. 2026-02-23 21:00:49.204325 | orchestrator | 2026-02-23 21:00:49 | INFO  | It takes a moment until task cc3b3164-562a-458e-a086-848e272e260a (facts) has been started and output is visible here. 
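The `ceph df` dump above ends the sequence of manual Ceph health checks. A minimal sketch of turning the free-space check into a pass/fail step; the RAW STORAGE table is pasted from the log so the sketch runs without a cluster (in the real job the input would come from `ceph df` directly), and the 85% threshold is an assumption, not something the job enforces.

```shell
# Print the %RAW USED column of the TOTAL row from `ceph df` output.
raw_used_pct() {
  awk '/^TOTAL/ {print $NF}'
}

# Sample pasted from the log above; a real check would pipe `ceph df` in.
sample='--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92'

pct=$(printf '%s\n' "$sample" | raw_used_pct)

# Hypothetical 85% alert threshold (awk compares the values numerically).
if awk -v p="$pct" 'BEGIN { exit !(p < 85) }'; then
  echo "raw usage ${pct}% below threshold"
else
  echo "raw usage ${pct}% above threshold" >&2
fi
```

With the sample above this prints `raw usage 5.92% below threshold`; the same `awk '/^TOTAL/'` extraction works for the per-class rows if a per-device-class threshold is wanted instead.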
2026-02-23 21:01:01.737207 | orchestrator | 2026-02-23 21:01:01.737271 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-23 21:01:01.737281 | orchestrator | 2026-02-23 21:01:01.737288 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-23 21:01:01.737295 | orchestrator | Monday 23 February 2026 21:00:53 +0000 (0:00:00.262) 0:00:00.262 ******* 2026-02-23 21:01:01.737302 | orchestrator | ok: [testbed-manager] 2026-02-23 21:01:01.737309 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:01:01.737316 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:01.737323 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:01:01.737330 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:01:01.737336 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:01:01.737342 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:01:01.737349 | orchestrator | 2026-02-23 21:01:01.737355 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-23 21:01:01.737378 | orchestrator | Monday 23 February 2026 21:00:54 +0000 (0:00:01.203) 0:00:01.465 ******* 2026-02-23 21:01:01.737393 | orchestrator | skipping: [testbed-manager] 2026-02-23 21:01:01.737400 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:01.737407 | orchestrator | skipping: [testbed-node-1] 2026-02-23 21:01:01.737413 | orchestrator | skipping: [testbed-node-2] 2026-02-23 21:01:01.737420 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:01:01.737426 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:01:01.737433 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:01:01.737439 | orchestrator | 2026-02-23 21:01:01.737446 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-23 21:01:01.737453 | orchestrator | 2026-02-23 21:01:01.737459 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-23 21:01:01.737466 | orchestrator | Monday 23 February 2026 21:00:56 +0000 (0:00:01.456) 0:00:02.922 ******* 2026-02-23 21:01:01.737472 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:01:01.737479 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:01.737486 | orchestrator | ok: [testbed-manager] 2026-02-23 21:01:01.737492 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:01:01.737499 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:01:01.737505 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:01:01.737512 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:01:01.737518 | orchestrator | 2026-02-23 21:01:01.737525 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-23 21:01:01.737532 | orchestrator | 2026-02-23 21:01:01.737538 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-23 21:01:01.737545 | orchestrator | Monday 23 February 2026 21:01:00 +0000 (0:00:04.632) 0:00:07.554 ******* 2026-02-23 21:01:01.737552 | orchestrator | skipping: [testbed-manager] 2026-02-23 21:01:01.737558 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:01.737565 | orchestrator | skipping: [testbed-node-1] 2026-02-23 21:01:01.737571 | orchestrator | skipping: [testbed-node-2] 2026-02-23 21:01:01.737578 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:01:01.737584 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:01:01.737591 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:01:01.737598 | orchestrator | 2026-02-23 21:01:01.737604 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 21:01:01.737611 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 21:01:01.737618 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-23 21:01:01.737625 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 21:01:01.737632 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 21:01:01.737638 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 21:01:01.737645 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 21:01:01.737651 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 21:01:01.737658 | orchestrator | 2026-02-23 21:01:01.737665 | orchestrator | 2026-02-23 21:01:01.737671 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 21:01:01.737678 | orchestrator | Monday 23 February 2026 21:01:01 +0000 (0:00:00.568) 0:00:08.123 ******* 2026-02-23 21:01:01.737685 | orchestrator | =============================================================================== 2026-02-23 21:01:01.737696 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.63s 2026-02-23 21:01:01.737703 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.46s 2026-02-23 21:01:01.737710 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.20s 2026-02-23 21:01:01.737717 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2026-02-23 21:01:02.038947 | orchestrator | + osism validate ceph-mons 2026-02-23 21:01:34.215140 | orchestrator | 2026-02-23 21:01:34.215215 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-23 21:01:34.215222 | orchestrator | 2026-02-23 21:01:34.215226 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-23 21:01:34.215231 | orchestrator | Monday 23 February 2026 21:01:18 +0000 (0:00:00.431) 0:00:00.431 ******* 2026-02-23 21:01:34.215237 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:01:34.215243 | orchestrator | 2026-02-23 21:01:34.215250 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-23 21:01:34.215256 | orchestrator | Monday 23 February 2026 21:01:19 +0000 (0:00:00.824) 0:00:01.256 ******* 2026-02-23 21:01:34.215263 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:01:34.215269 | orchestrator | 2026-02-23 21:01:34.215276 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-23 21:01:34.215282 | orchestrator | Monday 23 February 2026 21:01:20 +0000 (0:00:00.965) 0:00:02.221 ******* 2026-02-23 21:01:34.215289 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215298 | orchestrator | 2026-02-23 21:01:34.215305 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-23 21:01:34.215310 | orchestrator | Monday 23 February 2026 21:01:20 +0000 (0:00:00.126) 0:00:02.347 ******* 2026-02-23 21:01:34.215314 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215318 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:01:34.215322 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:01:34.215326 | orchestrator | 2026-02-23 21:01:34.215330 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-23 21:01:34.215334 | orchestrator | Monday 23 February 2026 21:01:21 +0000 (0:00:00.297) 0:00:02.645 ******* 2026-02-23 21:01:34.215349 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:01:34.215353 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:01:34.215363 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215367 | 
orchestrator | 2026-02-23 21:01:34.215371 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-23 21:01:34.215375 | orchestrator | Monday 23 February 2026 21:01:22 +0000 (0:00:01.044) 0:00:03.689 ******* 2026-02-23 21:01:34.215379 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.215383 | orchestrator | skipping: [testbed-node-1] 2026-02-23 21:01:34.215387 | orchestrator | skipping: [testbed-node-2] 2026-02-23 21:01:34.215391 | orchestrator | 2026-02-23 21:01:34.215395 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-23 21:01:34.215399 | orchestrator | Monday 23 February 2026 21:01:22 +0000 (0:00:00.287) 0:00:03.977 ******* 2026-02-23 21:01:34.215403 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215406 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:01:34.215410 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:01:34.215414 | orchestrator | 2026-02-23 21:01:34.215418 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-23 21:01:34.215422 | orchestrator | Monday 23 February 2026 21:01:22 +0000 (0:00:00.472) 0:00:04.449 ******* 2026-02-23 21:01:34.215425 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215429 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:01:34.215435 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:01:34.215442 | orchestrator | 2026-02-23 21:01:34.215450 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-23 21:01:34.215460 | orchestrator | Monday 23 February 2026 21:01:23 +0000 (0:00:00.312) 0:00:04.761 ******* 2026-02-23 21:01:34.215485 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.215493 | orchestrator | skipping: [testbed-node-1] 2026-02-23 21:01:34.215534 | orchestrator | skipping: [testbed-node-2] 2026-02-23 21:01:34.215540 | orchestrator | 2026-02-23 
21:01:34.215544 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-02-23 21:01:34.215548 | orchestrator | Monday 23 February 2026 21:01:23 +0000 (0:00:00.297) 0:00:05.059 ******* 2026-02-23 21:01:34.215552 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215555 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:01:34.215559 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:01:34.215563 | orchestrator | 2026-02-23 21:01:34.215592 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-23 21:01:34.215596 | orchestrator | Monday 23 February 2026 21:01:24 +0000 (0:00:00.519) 0:00:05.578 ******* 2026-02-23 21:01:34.215600 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.215603 | orchestrator | 2026-02-23 21:01:34.215607 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-23 21:01:34.215611 | orchestrator | Monday 23 February 2026 21:01:24 +0000 (0:00:00.248) 0:00:05.827 ******* 2026-02-23 21:01:34.215614 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.215619 | orchestrator | 2026-02-23 21:01:34.215623 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-23 21:01:34.215628 | orchestrator | Monday 23 February 2026 21:01:24 +0000 (0:00:00.251) 0:00:06.078 ******* 2026-02-23 21:01:34.215632 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.215637 | orchestrator | 2026-02-23 21:01:34.215641 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:01:34.215645 | orchestrator | Monday 23 February 2026 21:01:24 +0000 (0:00:00.284) 0:00:06.363 ******* 2026-02-23 21:01:34.215649 | orchestrator | 2026-02-23 21:01:34.215654 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:01:34.215659 | orchestrator | 
Monday 23 February 2026 21:01:24 +0000 (0:00:00.083) 0:00:06.447 ******* 2026-02-23 21:01:34.215666 | orchestrator | 2026-02-23 21:01:34.215671 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:01:34.215679 | orchestrator | Monday 23 February 2026 21:01:25 +0000 (0:00:00.106) 0:00:06.554 ******* 2026-02-23 21:01:34.215687 | orchestrator | 2026-02-23 21:01:34.215695 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-23 21:01:34.215700 | orchestrator | Monday 23 February 2026 21:01:25 +0000 (0:00:00.084) 0:00:06.639 ******* 2026-02-23 21:01:34.215706 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.215712 | orchestrator | 2026-02-23 21:01:34.215718 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-23 21:01:34.215724 | orchestrator | Monday 23 February 2026 21:01:25 +0000 (0:00:00.259) 0:00:06.898 ******* 2026-02-23 21:01:34.215730 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.215736 | orchestrator | 2026-02-23 21:01:34.215757 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-02-23 21:01:34.215764 | orchestrator | Monday 23 February 2026 21:01:25 +0000 (0:00:00.239) 0:00:07.138 ******* 2026-02-23 21:01:34.215770 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215776 | orchestrator | 2026-02-23 21:01:34.215782 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-02-23 21:01:34.215803 | orchestrator | Monday 23 February 2026 21:01:25 +0000 (0:00:00.126) 0:00:07.264 ******* 2026-02-23 21:01:34.215816 | orchestrator | changed: [testbed-node-0] 2026-02-23 21:01:34.215821 | orchestrator | 2026-02-23 21:01:34.215825 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-02-23 21:01:34.215829 | orchestrator | Monday 
23 February 2026 21:01:27 +0000 (0:00:01.489) 0:00:08.753 ******* 2026-02-23 21:01:34.215834 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215838 | orchestrator | 2026-02-23 21:01:34.215842 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-02-23 21:01:34.215847 | orchestrator | Monday 23 February 2026 21:01:27 +0000 (0:00:00.482) 0:00:09.236 ******* 2026-02-23 21:01:34.215858 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.215862 | orchestrator | 2026-02-23 21:01:34.215867 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-02-23 21:01:34.215871 | orchestrator | Monday 23 February 2026 21:01:27 +0000 (0:00:00.143) 0:00:09.379 ******* 2026-02-23 21:01:34.215875 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215879 | orchestrator | 2026-02-23 21:01:34.215883 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-02-23 21:01:34.215891 | orchestrator | Monday 23 February 2026 21:01:28 +0000 (0:00:00.346) 0:00:09.726 ******* 2026-02-23 21:01:34.215895 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215898 | orchestrator | 2026-02-23 21:01:34.215902 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-02-23 21:01:34.215906 | orchestrator | Monday 23 February 2026 21:01:28 +0000 (0:00:00.312) 0:00:10.038 ******* 2026-02-23 21:01:34.215909 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.215913 | orchestrator | 2026-02-23 21:01:34.215917 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-02-23 21:01:34.215921 | orchestrator | Monday 23 February 2026 21:01:28 +0000 (0:00:00.117) 0:00:10.155 ******* 2026-02-23 21:01:34.215924 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215928 | orchestrator | 2026-02-23 21:01:34.215932 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-02-23 21:01:34.215936 | orchestrator | Monday 23 February 2026 21:01:28 +0000 (0:00:00.127) 0:00:10.283 ******* 2026-02-23 21:01:34.215939 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215943 | orchestrator | 2026-02-23 21:01:34.215947 | orchestrator | TASK [Gather status data] ****************************************************** 2026-02-23 21:01:34.215950 | orchestrator | Monday 23 February 2026 21:01:28 +0000 (0:00:00.131) 0:00:10.415 ******* 2026-02-23 21:01:34.215954 | orchestrator | changed: [testbed-node-0] 2026-02-23 21:01:34.215958 | orchestrator | 2026-02-23 21:01:34.215962 | orchestrator | TASK [Set health test data] **************************************************** 2026-02-23 21:01:34.215965 | orchestrator | Monday 23 February 2026 21:01:30 +0000 (0:00:01.411) 0:00:11.826 ******* 2026-02-23 21:01:34.215969 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.215973 | orchestrator | 2026-02-23 21:01:34.215994 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-02-23 21:01:34.216003 | orchestrator | Monday 23 February 2026 21:01:30 +0000 (0:00:00.311) 0:00:12.138 ******* 2026-02-23 21:01:34.216010 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.216016 | orchestrator | 2026-02-23 21:01:34.216022 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-02-23 21:01:34.216028 | orchestrator | Monday 23 February 2026 21:01:30 +0000 (0:00:00.154) 0:00:12.292 ******* 2026-02-23 21:01:34.216033 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:01:34.216039 | orchestrator | 2026-02-23 21:01:34.216045 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-02-23 21:01:34.216051 | orchestrator | Monday 23 February 2026 21:01:30 +0000 (0:00:00.150) 0:00:12.442 ******* 2026-02-23 21:01:34.216057 | orchestrator | 
skipping: [testbed-node-0] 2026-02-23 21:01:34.216063 | orchestrator | 2026-02-23 21:01:34.216068 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-02-23 21:01:34.216074 | orchestrator | Monday 23 February 2026 21:01:31 +0000 (0:00:00.312) 0:00:12.755 ******* 2026-02-23 21:01:34.216079 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.216085 | orchestrator | 2026-02-23 21:01:34.216090 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-23 21:01:34.216096 | orchestrator | Monday 23 February 2026 21:01:31 +0000 (0:00:00.144) 0:00:12.899 ******* 2026-02-23 21:01:34.216103 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:01:34.216109 | orchestrator | 2026-02-23 21:01:34.216115 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-23 21:01:34.216121 | orchestrator | Monday 23 February 2026 21:01:31 +0000 (0:00:00.272) 0:00:13.172 ******* 2026-02-23 21:01:34.216134 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:01:34.216144 | orchestrator | 2026-02-23 21:01:34.216151 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-23 21:01:34.216156 | orchestrator | Monday 23 February 2026 21:01:31 +0000 (0:00:00.263) 0:00:13.436 ******* 2026-02-23 21:01:34.216162 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:01:34.216169 | orchestrator | 2026-02-23 21:01:34.216175 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-23 21:01:34.216182 | orchestrator | Monday 23 February 2026 21:01:33 +0000 (0:00:01.720) 0:00:15.156 ******* 2026-02-23 21:01:34.216188 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:01:34.216194 | orchestrator | 2026-02-23 21:01:34.216201 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-02-23 21:01:34.216208 | orchestrator | Monday 23 February 2026 21:01:33 +0000 (0:00:00.270) 0:00:15.426 ******* 2026-02-23 21:01:34.216212 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:01:34.216216 | orchestrator | 2026-02-23 21:01:34.216226 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:01:36.891071 | orchestrator | Monday 23 February 2026 21:01:34 +0000 (0:00:00.248) 0:00:15.675 ******* 2026-02-23 21:01:36.891129 | orchestrator | 2026-02-23 21:01:36.891138 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:01:36.891146 | orchestrator | Monday 23 February 2026 21:01:34 +0000 (0:00:00.073) 0:00:15.749 ******* 2026-02-23 21:01:36.891153 | orchestrator | 2026-02-23 21:01:36.891159 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:01:36.891165 | orchestrator | Monday 23 February 2026 21:01:34 +0000 (0:00:00.072) 0:00:15.821 ******* 2026-02-23 21:01:36.891172 | orchestrator | 2026-02-23 21:01:36.891178 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-23 21:01:36.891184 | orchestrator | Monday 23 February 2026 21:01:34 +0000 (0:00:00.074) 0:00:15.895 ******* 2026-02-23 21:01:36.891191 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:01:36.891197 | orchestrator | 2026-02-23 21:01:36.891203 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-23 21:01:36.891209 | orchestrator | Monday 23 February 2026 21:01:35 +0000 (0:00:01.431) 0:00:17.326 ******* 2026-02-23 21:01:36.891216 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-23 21:01:36.891223 | orchestrator |  "msg": [ 2026-02-23 
21:01:36.891230 | orchestrator |  "Validator run completed.", 2026-02-23 21:01:36.891236 | orchestrator |  "You can find the report file here:", 2026-02-23 21:01:36.891243 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-23T21:01:19+00:00-report.json", 2026-02-23 21:01:36.891250 | orchestrator |  "on the following host:", 2026-02-23 21:01:36.891256 | orchestrator |  "testbed-manager" 2026-02-23 21:01:36.891262 | orchestrator |  ] 2026-02-23 21:01:36.891269 | orchestrator | } 2026-02-23 21:01:36.891275 | orchestrator | 2026-02-23 21:01:36.891281 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 21:01:36.891288 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-23 21:01:36.891295 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 21:01:36.891302 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 21:01:36.891308 | orchestrator | 2026-02-23 21:01:36.891315 | orchestrator | 2026-02-23 21:01:36.891321 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 21:01:36.891327 | orchestrator | Monday 23 February 2026 21:01:36 +0000 (0:00:00.769) 0:00:18.096 ******* 2026-02-23 21:01:36.891350 | orchestrator | =============================================================================== 2026-02-23 21:01:36.891356 | orchestrator | Aggregate test results step one ----------------------------------------- 1.72s 2026-02-23 21:01:36.891363 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.49s 2026-02-23 21:01:36.891369 | orchestrator | Write report file ------------------------------------------------------- 1.43s 2026-02-23 21:01:36.891376 | orchestrator | Gather status data 
------------------------------------------------------ 1.41s 2026-02-23 21:01:36.891382 | orchestrator | Get container info ------------------------------------------------------ 1.04s 2026-02-23 21:01:36.891388 | orchestrator | Create report output directory ------------------------------------------ 0.97s 2026-02-23 21:01:36.891395 | orchestrator | Get timestamp for report file ------------------------------------------- 0.82s 2026-02-23 21:01:36.891401 | orchestrator | Print report file information ------------------------------------------- 0.77s 2026-02-23 21:01:36.891407 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.52s 2026-02-23 21:01:36.891413 | orchestrator | Set quorum test data ---------------------------------------------------- 0.48s 2026-02-23 21:01:36.891420 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s 2026-02-23 21:01:36.891426 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.35s 2026-02-23 21:01:36.891432 | orchestrator | Fail cluster-health if health is not acceptable (strict) ---------------- 0.31s 2026-02-23 21:01:36.891438 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-02-23 21:01:36.891444 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s 2026-02-23 21:01:36.891451 | orchestrator | Set health test data ---------------------------------------------------- 0.31s 2026-02-23 21:01:36.891457 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s 2026-02-23 21:01:36.891463 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2026-02-23 21:01:36.891470 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-02-23 21:01:36.891476 | orchestrator | Aggregate test results step three 
--------------------------------------- 0.28s 2026-02-23 21:01:37.193456 | orchestrator | + osism validate ceph-mgrs 2026-02-23 21:02:07.694122 | orchestrator | 2026-02-23 21:02:07.694188 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-02-23 21:02:07.694198 | orchestrator | 2026-02-23 21:02:07.694206 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-23 21:02:07.694214 | orchestrator | Monday 23 February 2026 21:01:53 +0000 (0:00:00.413) 0:00:00.413 ******* 2026-02-23 21:02:07.694222 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:07.694229 | orchestrator | 2026-02-23 21:02:07.694236 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-23 21:02:07.694244 | orchestrator | Monday 23 February 2026 21:01:54 +0000 (0:00:00.738) 0:00:01.152 ******* 2026-02-23 21:02:07.694251 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:07.694258 | orchestrator | 2026-02-23 21:02:07.694266 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-23 21:02:07.694273 | orchestrator | Monday 23 February 2026 21:01:55 +0000 (0:00:00.862) 0:00:02.014 ******* 2026-02-23 21:02:07.694280 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:02:07.694288 | orchestrator | 2026-02-23 21:02:07.694295 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-23 21:02:07.694310 | orchestrator | Monday 23 February 2026 21:01:55 +0000 (0:00:00.106) 0:00:02.121 ******* 2026-02-23 21:02:07.694319 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:02:07.694326 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:02:07.694333 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:02:07.694340 | orchestrator | 2026-02-23 21:02:07.694347 | orchestrator | TASK [Get container info] 
****************************************************** 2026-02-23 21:02:07.694367 | orchestrator | Monday 23 February 2026 21:01:55 +0000 (0:00:00.264) 0:00:02.386 ******* 2026-02-23 21:02:07.694375 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:02:07.694382 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:02:07.694390 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:02:07.694397 | orchestrator | 2026-02-23 21:02:07.694404 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-23 21:02:07.694412 | orchestrator | Monday 23 February 2026 21:01:56 +0000 (0:00:01.130) 0:00:03.517 ******* 2026-02-23 21:02:07.694419 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:02:07.694426 | orchestrator | skipping: [testbed-node-1] 2026-02-23 21:02:07.694433 | orchestrator | skipping: [testbed-node-2] 2026-02-23 21:02:07.694441 | orchestrator | 2026-02-23 21:02:07.694450 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-23 21:02:07.694458 | orchestrator | Monday 23 February 2026 21:01:57 +0000 (0:00:00.294) 0:00:03.811 ******* 2026-02-23 21:02:07.694465 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:02:07.694473 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:02:07.694480 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:02:07.694487 | orchestrator | 2026-02-23 21:02:07.694495 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-23 21:02:07.694502 | orchestrator | Monday 23 February 2026 21:01:57 +0000 (0:00:00.503) 0:00:04.315 ******* 2026-02-23 21:02:07.694510 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:02:07.694517 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:02:07.694524 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:02:07.694531 | orchestrator | 2026-02-23 21:02:07.694538 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 
2026-02-23 21:02:07.694545 | orchestrator | Monday 23 February 2026 21:01:57 +0000 (0:00:00.304) 0:00:04.620 ******* 2026-02-23 21:02:07.694552 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:02:07.694560 | orchestrator | skipping: [testbed-node-1] 2026-02-23 21:02:07.694567 | orchestrator | skipping: [testbed-node-2] 2026-02-23 21:02:07.694574 | orchestrator | 2026-02-23 21:02:07.694582 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-02-23 21:02:07.694589 | orchestrator | Monday 23 February 2026 21:01:58 +0000 (0:00:00.306) 0:00:04.926 ******* 2026-02-23 21:02:07.694597 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:02:07.694604 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:02:07.694611 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:02:07.694619 | orchestrator | 2026-02-23 21:02:07.694626 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-23 21:02:07.694634 | orchestrator | Monday 23 February 2026 21:01:58 +0000 (0:00:00.472) 0:00:05.399 ******* 2026-02-23 21:02:07.694642 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:02:07.694649 | orchestrator | 2026-02-23 21:02:07.694656 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-23 21:02:07.694664 | orchestrator | Monday 23 February 2026 21:01:58 +0000 (0:00:00.274) 0:00:05.673 ******* 2026-02-23 21:02:07.694672 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:02:07.694679 | orchestrator | 2026-02-23 21:02:07.694687 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-23 21:02:07.694694 | orchestrator | Monday 23 February 2026 21:01:59 +0000 (0:00:00.267) 0:00:05.940 ******* 2026-02-23 21:02:07.694702 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:02:07.694709 | orchestrator | 2026-02-23 21:02:07.694716 | orchestrator | TASK [Flush handlers] 
********************************************************** 2026-02-23 21:02:07.694724 | orchestrator | Monday 23 February 2026 21:01:59 +0000 (0:00:00.248) 0:00:06.189 ******* 2026-02-23 21:02:07.694731 | orchestrator | 2026-02-23 21:02:07.694739 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:07.694746 | orchestrator | Monday 23 February 2026 21:01:59 +0000 (0:00:00.071) 0:00:06.260 ******* 2026-02-23 21:02:07.694754 | orchestrator | 2026-02-23 21:02:07.694761 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:07.694774 | orchestrator | Monday 23 February 2026 21:01:59 +0000 (0:00:00.071) 0:00:06.332 ******* 2026-02-23 21:02:07.694782 | orchestrator | 2026-02-23 21:02:07.694789 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-23 21:02:07.694797 | orchestrator | Monday 23 February 2026 21:01:59 +0000 (0:00:00.074) 0:00:06.407 ******* 2026-02-23 21:02:07.694804 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:02:07.694811 | orchestrator | 2026-02-23 21:02:07.694819 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-23 21:02:07.694827 | orchestrator | Monday 23 February 2026 21:01:59 +0000 (0:00:00.240) 0:00:06.648 ******* 2026-02-23 21:02:07.694834 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:02:07.694841 | orchestrator | 2026-02-23 21:02:07.694862 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-02-23 21:02:07.694869 | orchestrator | Monday 23 February 2026 21:02:00 +0000 (0:00:00.228) 0:00:06.876 ******* 2026-02-23 21:02:07.694876 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:02:07.694883 | orchestrator | 2026-02-23 21:02:07.694890 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-02-23 
21:02:07.694897 | orchestrator | Monday 23 February 2026 21:02:00 +0000 (0:00:00.110) 0:00:06.986 ******* 2026-02-23 21:02:07.694904 | orchestrator | changed: [testbed-node-0] 2026-02-23 21:02:07.694911 | orchestrator | 2026-02-23 21:02:07.694958 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-02-23 21:02:07.694966 | orchestrator | Monday 23 February 2026 21:02:02 +0000 (0:00:01.925) 0:00:08.912 ******* 2026-02-23 21:02:07.694972 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:02:07.694979 | orchestrator | 2026-02-23 21:02:07.694985 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-02-23 21:02:07.694992 | orchestrator | Monday 23 February 2026 21:02:02 +0000 (0:00:00.422) 0:00:09.335 ******* 2026-02-23 21:02:07.694998 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:02:07.695003 | orchestrator | 2026-02-23 21:02:07.695007 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-02-23 21:02:07.695011 | orchestrator | Monday 23 February 2026 21:02:02 +0000 (0:00:00.327) 0:00:09.662 ******* 2026-02-23 21:02:07.695015 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:02:07.695020 | orchestrator | 2026-02-23 21:02:07.695024 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-02-23 21:02:07.695028 | orchestrator | Monday 23 February 2026 21:02:03 +0000 (0:00:00.190) 0:00:09.853 ******* 2026-02-23 21:02:07.695033 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:02:07.695037 | orchestrator | 2026-02-23 21:02:07.695041 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-23 21:02:07.695045 | orchestrator | Monday 23 February 2026 21:02:03 +0000 (0:00:00.145) 0:00:09.999 ******* 2026-02-23 21:02:07.695050 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:07.695054 | 
orchestrator | 2026-02-23 21:02:07.695058 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-23 21:02:07.695065 | orchestrator | Monday 23 February 2026 21:02:03 +0000 (0:00:00.286) 0:00:10.285 ******* 2026-02-23 21:02:07.695069 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:02:07.695073 | orchestrator | 2026-02-23 21:02:07.695078 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-23 21:02:07.695082 | orchestrator | Monday 23 February 2026 21:02:03 +0000 (0:00:00.244) 0:00:10.530 ******* 2026-02-23 21:02:07.695086 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:07.695090 | orchestrator | 2026-02-23 21:02:07.695094 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-23 21:02:07.695098 | orchestrator | Monday 23 February 2026 21:02:05 +0000 (0:00:01.232) 0:00:11.763 ******* 2026-02-23 21:02:07.695102 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:07.695106 | orchestrator | 2026-02-23 21:02:07.695110 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-23 21:02:07.695118 | orchestrator | Monday 23 February 2026 21:02:05 +0000 (0:00:00.295) 0:00:12.058 ******* 2026-02-23 21:02:07.695122 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:07.695126 | orchestrator | 2026-02-23 21:02:07.695130 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:07.695134 | orchestrator | Monday 23 February 2026 21:02:05 +0000 (0:00:00.254) 0:00:12.313 ******* 2026-02-23 21:02:07.695138 | orchestrator | 2026-02-23 21:02:07.695142 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:07.695146 | orchestrator | Monday 23 
February 2026 21:02:05 +0000 (0:00:00.069) 0:00:12.383 ******* 2026-02-23 21:02:07.695150 | orchestrator | 2026-02-23 21:02:07.695154 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:07.695158 | orchestrator | Monday 23 February 2026 21:02:05 +0000 (0:00:00.067) 0:00:12.451 ******* 2026-02-23 21:02:07.695162 | orchestrator | 2026-02-23 21:02:07.695166 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-23 21:02:07.695170 | orchestrator | Monday 23 February 2026 21:02:05 +0000 (0:00:00.248) 0:00:12.699 ******* 2026-02-23 21:02:07.695174 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:07.695178 | orchestrator | 2026-02-23 21:02:07.695182 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-23 21:02:07.695186 | orchestrator | Monday 23 February 2026 21:02:07 +0000 (0:00:01.305) 0:00:14.004 ******* 2026-02-23 21:02:07.695190 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-23 21:02:07.695194 | orchestrator |  "msg": [ 2026-02-23 21:02:07.695199 | orchestrator |  "Validator run completed.", 2026-02-23 21:02:07.695203 | orchestrator |  "You can find the report file here:", 2026-02-23 21:02:07.695207 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-23T21:01:54+00:00-report.json", 2026-02-23 21:02:07.695212 | orchestrator |  "on the following host:", 2026-02-23 21:02:07.695216 | orchestrator |  "testbed-manager" 2026-02-23 21:02:07.695220 | orchestrator |  ] 2026-02-23 21:02:07.695224 | orchestrator | } 2026-02-23 21:02:07.695228 | orchestrator | 2026-02-23 21:02:07.695232 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 21:02:07.695237 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 
2026-02-23 21:02:07.695242 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 21:02:07.695251 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-23 21:02:07.992254 | orchestrator | 2026-02-23 21:02:07.992347 | orchestrator | 2026-02-23 21:02:07.992363 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 21:02:07.992394 | orchestrator | Monday 23 February 2026 21:02:07 +0000 (0:00:00.387) 0:00:14.392 ******* 2026-02-23 21:02:07.992405 | orchestrator | =============================================================================== 2026-02-23 21:02:07.992416 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.93s 2026-02-23 21:02:07.992427 | orchestrator | Write report file ------------------------------------------------------- 1.31s 2026-02-23 21:02:07.992437 | orchestrator | Aggregate test results step one ----------------------------------------- 1.23s 2026-02-23 21:02:07.992448 | orchestrator | Get container info ------------------------------------------------------ 1.13s 2026-02-23 21:02:07.992458 | orchestrator | Create report output directory ------------------------------------------ 0.86s 2026-02-23 21:02:07.992468 | orchestrator | Get timestamp for report file ------------------------------------------- 0.74s 2026-02-23 21:02:07.992479 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2026-02-23 21:02:07.992509 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.47s 2026-02-23 21:02:07.992521 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.42s 2026-02-23 21:02:07.992532 | orchestrator | Print report file information ------------------------------------------- 0.39s 2026-02-23 21:02:07.992543 | orchestrator | 
Flush handlers ---------------------------------------------------------- 0.39s 2026-02-23 21:02:07.992554 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.33s 2026-02-23 21:02:07.992563 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2026-02-23 21:02:07.992574 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2026-02-23 21:02:07.992584 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s 2026-02-23 21:02:07.992595 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-02-23 21:02:07.992605 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s 2026-02-23 21:02:07.992615 | orchestrator | Aggregate test results step one ----------------------------------------- 0.27s 2026-02-23 21:02:07.992626 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2026-02-23 21:02:07.992637 | orchestrator | Prepare test data for container existance test -------------------------- 0.26s 2026-02-23 21:02:08.340791 | orchestrator | + osism validate ceph-osds 2026-02-23 21:02:29.546154 | orchestrator | 2026-02-23 21:02:29.546310 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-02-23 21:02:29.546323 | orchestrator | 2026-02-23 21:02:29.546331 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-23 21:02:29.546337 | orchestrator | Monday 23 February 2026 21:02:25 +0000 (0:00:00.456) 0:00:00.456 ******* 2026-02-23 21:02:29.546345 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:29.546352 | orchestrator | 2026-02-23 21:02:29.546359 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-23 
21:02:29.546365 | orchestrator | Monday 23 February 2026 21:02:26 +0000 (0:00:00.834) 0:00:01.290 ******* 2026-02-23 21:02:29.546380 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:29.546386 | orchestrator | 2026-02-23 21:02:29.546392 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-23 21:02:29.546399 | orchestrator | Monday 23 February 2026 21:02:26 +0000 (0:00:00.503) 0:00:01.793 ******* 2026-02-23 21:02:29.546405 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:29.546411 | orchestrator | 2026-02-23 21:02:29.546418 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-23 21:02:29.546424 | orchestrator | Monday 23 February 2026 21:02:27 +0000 (0:00:00.722) 0:00:02.516 ******* 2026-02-23 21:02:29.546431 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:29.546438 | orchestrator | 2026-02-23 21:02:29.546445 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-02-23 21:02:29.546452 | orchestrator | Monday 23 February 2026 21:02:27 +0000 (0:00:00.117) 0:00:02.633 ******* 2026-02-23 21:02:29.546459 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:29.546465 | orchestrator | 2026-02-23 21:02:29.546471 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-23 21:02:29.546478 | orchestrator | Monday 23 February 2026 21:02:27 +0000 (0:00:00.133) 0:00:02.767 ******* 2026-02-23 21:02:29.546484 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:29.546490 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:02:29.546497 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:02:29.546503 | orchestrator | 2026-02-23 21:02:29.546509 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-02-23 21:02:29.546515 | 
orchestrator | Monday 23 February 2026 21:02:27 +0000 (0:00:00.304) 0:00:03.072 ******* 2026-02-23 21:02:29.546521 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:29.546528 | orchestrator | 2026-02-23 21:02:29.546564 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-23 21:02:29.546571 | orchestrator | Monday 23 February 2026 21:02:27 +0000 (0:00:00.146) 0:00:03.218 ******* 2026-02-23 21:02:29.546577 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:29.546584 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:29.546590 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:29.546596 | orchestrator | 2026-02-23 21:02:29.546602 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-02-23 21:02:29.546608 | orchestrator | Monday 23 February 2026 21:02:28 +0000 (0:00:00.345) 0:00:03.564 ******* 2026-02-23 21:02:29.546615 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:29.546621 | orchestrator | 2026-02-23 21:02:29.546650 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-23 21:02:29.546658 | orchestrator | Monday 23 February 2026 21:02:29 +0000 (0:00:00.753) 0:00:04.318 ******* 2026-02-23 21:02:29.546665 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:29.546672 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:29.546678 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:29.546684 | orchestrator | 2026-02-23 21:02:29.546691 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-02-23 21:02:29.546697 | orchestrator | Monday 23 February 2026 21:02:29 +0000 (0:00:00.299) 0:00:04.617 ******* 2026-02-23 21:02:29.546707 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cbe299c4caeb17ef00f43aa72da27107500e2f0a469ad5cef6db743ae04ce3cc', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 
'running', 'status': 'Up 6 minutes (healthy)'})  2026-02-23 21:02:29.546718 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7d7fb9fa2b0532b67e6cce48a8c901f5ee6c49622650a156f2895ca7c553a8ec', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-02-23 21:02:29.546727 | orchestrator | skipping: [testbed-node-3] => (item={'id': '088c0d858c07cc32a40a4bbb2d09218cb8b7749265a734bebae0295255363648', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-02-23 21:02:29.546736 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b0eb91895ab76960796dd3c1c16de3ad1bc55850cd9cb97f524f12f09689be2e', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-02-23 21:02:29.546753 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c2239ef6767a5066639a288fc5c81d49696f883408cca4c6de29ff5f66f0d12a', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-02-23 21:02:29.546783 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bbe665d3d9762169b3bf9efd18af8b5e693abc6798effa91193e70ca5cab1b99', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-02-23 21:02:29.546790 | orchestrator | skipping: [testbed-node-3] => (item={'id': '35efe1d5d085abb26f2722b873858e831a9324bbad9adf61a0709a378a77a604', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-02-23 21:02:29.546797 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'8422946cf85e41729a675866d430fce16983a758e0bacf22f56cdda6dde96b8d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-02-23 21:02:29.546803 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c169bb3ea8fde8ee74b5d0bc433bd360197ef88cf5ad1807752136b13c9ba6b5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-02-23 21:02:29.546818 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e79d80cef51256d95c0904e5433d51b17fe5bbf452c3b47b048d21ccf1a559a4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-02-23 21:02:29.546824 | orchestrator | ok: [testbed-node-3] => (item={'id': 'd0a949dc524dc6bfed44c177dc082afa520c38bb87af46a66a7f7738e668f5ee', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-02-23 21:02:29.546831 | orchestrator | ok: [testbed-node-3] => (item={'id': 'cbb47aad2fd16387d7a15c2e8cf8dcced0b67aa42ba692c85d2f04568c288840', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-02-23 21:02:29.546837 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1243f39c21a38dbf4b0b39c1e78e87509be418697b9445fcbf963a1894cbf0f8', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2026-02-23 21:02:29.546842 | orchestrator | skipping: [testbed-node-3] => (item={'id': '84f1c566e9ac8b74513070c60238cabd42aeb52d3b7d476303a7ff51f27eeac4', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-02-23 21:02:29.546852 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '0f2a7aef0454dcc8898b298d899096e69ed5ee0e2112096513c3855e54a476b6', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-02-23 21:02:29.546859 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5afb20e86fb6311cd0a6dc56a050c3a78dad9bef53107178b5b65122671ed88f', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2026-02-23 21:02:29.546865 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3a698efe3ab01bba6948211e2c130c4cd61a13d3d08ea0412489b705eeeb1d00', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-02-23 21:02:29.546871 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eff3e83d4347a1dbdb63dbe47fe7d1a22c400399e713973f686b06ebccdb036e', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2026-02-23 21:02:29.546949 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5213bde47215353a43debab2bbc07f2ce35f1688b789d1151909d4c20523d10a', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-02-23 21:02:29.546957 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4940f4fb109e19e2e7be5301b5ad42b884984774c8c00a66cb4efdf2dadabc89', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-02-23 21:02:29.546968 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd2b5f88c19da939254fb3a52cbf7535daf0e24ecaef9f7c0ed5f20d7bfb139d', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  
2026-02-23 21:02:29.546984 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8455933f7d352fad219745711234984f9f96064aa5fa4ba9ddecb3fd6d9abe5e', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-02-23 21:02:29.715279 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fb549e49df064c02490e72d8fbcb581efef0d46b1a1ac87f6f2dcdb4a98f4fa7', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-02-23 21:02:29.715380 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd6562cd6b91137718e64dce790d9b642a6b2e6f3b703e2b4aacf7e6d3e39910', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-02-23 21:02:29.715413 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd46ba778472d59376fa2fff4f356d980b8811bcacb9d4dfcc3cf213d76959b30', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-02-23 21:02:29.715421 | orchestrator | skipping: [testbed-node-4] => (item={'id': '56880eec5828eb6e101143ab137987b21c9853aaaeafbbdfd8343f3be9d42c08', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-02-23 21:02:29.715428 | orchestrator | skipping: [testbed-node-4] => (item={'id': '43c0c79c883644a9b35485f8d0cd61de4167f9ff3ca35416970a143144a0faf8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-02-23 21:02:29.715436 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'da0bbda8fe97020a98d33b4d32495a7f8319268f52b56085c11093e8ec2bd686', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-02-23 21:02:29.715445 | orchestrator | ok: [testbed-node-4] => (item={'id': '2406eec7f8c327788e162bcc8fafcdb83413435debcbbe44edf572b348670a19', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-02-23 21:02:29.715453 | orchestrator | ok: [testbed-node-4] => (item={'id': 'dea67446bf2f2dd071e352731b46b04299e3b62070cc18d1900f945f89f5edfb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-02-23 21:02:29.715460 | orchestrator | skipping: [testbed-node-4] => (item={'id': '98b15e79669c43fbf9319cac116750f7228450efadab0790cfd07b3de4c5609c', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2026-02-23 21:02:29.715467 | orchestrator | skipping: [testbed-node-4] => (item={'id': '72fbcec08fe95e5b346148af617cd4035d7e5c0f112c6cff1f060dc75f8b4bd2', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-02-23 21:02:29.715474 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1670aa2e7aa45da48739d87185cfc0c49f5ae315e1a3c761c2dbbfe7d78d8b30', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-02-23 21:02:29.715482 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c17aa95af347fd7a9ebdc31333e220de6545149e5bcebc0917af8b57ebf68bf8', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2026-02-23 21:02:29.715488 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': 'e375f3cd33cabace1a77e56f924342f2eed06361df241768886a2fc9a251230d', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-02-23 21:02:29.715495 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3cc4b56e5e7bf9f0a0ce0314c5592b6946aa3723ce05ac3a6857d33666cb0a9a', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2026-02-23 21:02:29.715502 | orchestrator | skipping: [testbed-node-5] => (item={'id': '61537c69ce995c40d1c188524e9f1a59e28695817adacfc3c384e9424c5c7709', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-02-23 21:02:29.715525 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cadb73560f8d6a917a5c0de3277a60e3d96ac78822e195b75d97ac02bc584591', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-02-23 21:02:29.715538 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a4dcefe010b1ecaa8101271cbf5e43d33ef5d4237bc6626dd449f946c0d488f6', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-02-23 21:02:29.715545 | orchestrator | skipping: [testbed-node-5] => (item={'id': '07bd55e2827a13c33694c80b1ee1b90dcbfe7e777f4f5dccc1065921d290cab4', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-02-23 21:02:29.715551 | orchestrator | skipping: [testbed-node-5] => (item={'id': '85dc78623057ffa895ef91b191e2c0cdafa7827a8e037522168c9f04006841d2', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 
minutes'})  2026-02-23 21:02:29.715557 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0235e515cb03b06c86cb5f33992554fe2b5da2f6cfd5deb79aead92407591430', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-02-23 21:02:29.715563 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6cfaf9fc875c018796354e909f7b46a7106a7e9f733d499e1fd58d77d8ca0146', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-02-23 21:02:29.715570 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9226cab909374407369b503f678489c6d703c265677f426e414e9ae6a8511be7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-02-23 21:02:29.715576 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2e46c6ba618399d944808fb1e29c0ccbe2dcfb4c9e08dd2a652706a68cc99ad3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-02-23 21:02:29.715583 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'acf2dab52edf47d95782f124affd95b79e60b83db153ef24b4a21ed05830d11d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-02-23 21:02:29.715605 | orchestrator | ok: [testbed-node-5] => (item={'id': '8fb1653edbc430586f9cda0f62d83c107c3460d7e069aeded0bd3c2d37ff5b29', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-02-23 21:02:29.715612 | orchestrator | ok: [testbed-node-5] => (item={'id': '00a21c76ca000d35e79a8296d7121006d5d94d2c156a527c07f9269cffa814ef', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-02-23 21:02:29.715619 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd884ddfb1a09b83d6eb1f45a9f63f59ff99dd003b579892b881bd779afe71c8e', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2026-02-23 21:02:29.715625 | orchestrator | skipping: [testbed-node-5] => (item={'id': '52cea3ac23d3730c4aed364a7da71b26dcef23c5f72b8d942deb6fa7d351e3c6', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-02-23 21:02:29.715632 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c637dcc62c360879d4b2087ace02f6385778aa970ced914547b977bcdf74ed0f', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-02-23 21:02:29.715641 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd60aa5b56753af3e9355dda8240889c80da4cfe1f4ad52d42b7119fabb262cae', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2026-02-23 21:02:29.715652 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1dc4d9087e54b5aeba95d090e04fb9e1269bbef98fcab4669ca786c15f2ce1c1', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-02-23 21:02:29.715664 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a56438ea4c3db87201780d161e0fb6fe2ef8a2975b72bd4ff651f8063125b38e', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2026-02-23 21:02:43.370654 | orchestrator | 2026-02-23 21:02:43.370742 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-02-23 21:02:43.370749 | orchestrator | Monday 23 February 2026 21:02:29 +0000 (0:00:00.469) 0:00:05.086 ******* 2026-02-23 21:02:43.370754 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.370759 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:43.370763 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:43.370767 | orchestrator | 2026-02-23 21:02:43.370771 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-02-23 21:02:43.370776 | orchestrator | Monday 23 February 2026 21:02:30 +0000 (0:00:00.323) 0:00:05.409 ******* 2026-02-23 21:02:43.370780 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.370785 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:02:43.370789 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:02:43.370792 | orchestrator | 2026-02-23 21:02:43.370796 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-02-23 21:02:43.370800 | orchestrator | Monday 23 February 2026 21:02:30 +0000 (0:00:00.497) 0:00:05.907 ******* 2026-02-23 21:02:43.370804 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.370807 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:43.370811 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:43.370815 | orchestrator | 2026-02-23 21:02:43.370819 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-23 21:02:43.370822 | orchestrator | Monday 23 February 2026 21:02:30 +0000 (0:00:00.318) 0:00:06.226 ******* 2026-02-23 21:02:43.370826 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.370830 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:43.370834 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:43.370838 | orchestrator | 2026-02-23 21:02:43.370842 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-02-23 
21:02:43.370846 | orchestrator | Monday 23 February 2026 21:02:31 +0000 (0:00:00.287) 0:00:06.513 ******* 2026-02-23 21:02:43.370887 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-02-23 21:02:43.370894 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-02-23 21:02:43.370898 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.370901 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-02-23 21:02:43.370905 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-02-23 21:02:43.370909 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:02:43.370913 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-02-23 21:02:43.370917 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-02-23 21:02:43.370921 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:02:43.370925 | orchestrator | 2026-02-23 21:02:43.370928 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-02-23 21:02:43.370932 | orchestrator | Monday 23 February 2026 21:02:31 +0000 (0:00:00.302) 0:00:06.815 ******* 2026-02-23 21:02:43.370936 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.370957 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:43.370961 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:43.370965 | orchestrator | 2026-02-23 21:02:43.370969 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-23 21:02:43.370973 | orchestrator | Monday 23 February 2026 21:02:32 +0000 (0:00:00.524) 0:00:07.339 ******* 2026-02-23 21:02:43.370977 | orchestrator | skipping: [testbed-node-3] 
2026-02-23 21:02:43.370980 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:02:43.370984 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:02:43.370988 | orchestrator | 2026-02-23 21:02:43.370992 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-23 21:02:43.370995 | orchestrator | Monday 23 February 2026 21:02:32 +0000 (0:00:00.279) 0:00:07.618 ******* 2026-02-23 21:02:43.370999 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.371003 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:02:43.371006 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:02:43.371010 | orchestrator | 2026-02-23 21:02:43.371014 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-02-23 21:02:43.371018 | orchestrator | Monday 23 February 2026 21:02:32 +0000 (0:00:00.285) 0:00:07.904 ******* 2026-02-23 21:02:43.371022 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.371026 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:43.371030 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:43.371034 | orchestrator | 2026-02-23 21:02:43.371037 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-23 21:02:43.371041 | orchestrator | Monday 23 February 2026 21:02:32 +0000 (0:00:00.331) 0:00:08.235 ******* 2026-02-23 21:02:43.371045 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.371049 | orchestrator | 2026-02-23 21:02:43.371053 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-23 21:02:43.371066 | orchestrator | Monday 23 February 2026 21:02:33 +0000 (0:00:00.635) 0:00:08.871 ******* 2026-02-23 21:02:43.371071 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.371075 | orchestrator | 2026-02-23 21:02:43.371078 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-02-23 21:02:43.371082 | orchestrator | Monday 23 February 2026 21:02:33 +0000 (0:00:00.244) 0:00:09.115 ******* 2026-02-23 21:02:43.371086 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.371089 | orchestrator | 2026-02-23 21:02:43.371093 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:43.371097 | orchestrator | Monday 23 February 2026 21:02:34 +0000 (0:00:00.241) 0:00:09.357 ******* 2026-02-23 21:02:43.371101 | orchestrator | 2026-02-23 21:02:43.371108 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:43.371114 | orchestrator | Monday 23 February 2026 21:02:34 +0000 (0:00:00.070) 0:00:09.427 ******* 2026-02-23 21:02:43.371120 | orchestrator | 2026-02-23 21:02:43.371126 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:43.371151 | orchestrator | Monday 23 February 2026 21:02:34 +0000 (0:00:00.067) 0:00:09.495 ******* 2026-02-23 21:02:43.371158 | orchestrator | 2026-02-23 21:02:43.371163 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-23 21:02:43.371169 | orchestrator | Monday 23 February 2026 21:02:34 +0000 (0:00:00.068) 0:00:09.563 ******* 2026-02-23 21:02:43.371175 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.371181 | orchestrator | 2026-02-23 21:02:43.371187 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-02-23 21:02:43.371192 | orchestrator | Monday 23 February 2026 21:02:34 +0000 (0:00:00.236) 0:00:09.800 ******* 2026-02-23 21:02:43.371197 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.371203 | orchestrator | 2026-02-23 21:02:43.371209 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-23 21:02:43.371215 | 
orchestrator | Monday 23 February 2026 21:02:34 +0000 (0:00:00.250) 0:00:10.050 ******* 2026-02-23 21:02:43.371221 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.371234 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:43.371240 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:43.371247 | orchestrator | 2026-02-23 21:02:43.371253 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-02-23 21:02:43.371258 | orchestrator | Monday 23 February 2026 21:02:35 +0000 (0:00:00.312) 0:00:10.363 ******* 2026-02-23 21:02:43.371264 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.371270 | orchestrator | 2026-02-23 21:02:43.371278 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-02-23 21:02:43.371285 | orchestrator | Monday 23 February 2026 21:02:35 +0000 (0:00:00.643) 0:00:11.006 ******* 2026-02-23 21:02:43.371290 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-23 21:02:43.371296 | orchestrator | 2026-02-23 21:02:43.371302 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-02-23 21:02:43.371309 | orchestrator | Monday 23 February 2026 21:02:37 +0000 (0:00:01.482) 0:00:12.489 ******* 2026-02-23 21:02:43.371314 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.371320 | orchestrator | 2026-02-23 21:02:43.371328 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-02-23 21:02:43.371335 | orchestrator | Monday 23 February 2026 21:02:37 +0000 (0:00:00.129) 0:00:12.618 ******* 2026-02-23 21:02:43.371341 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.371347 | orchestrator | 2026-02-23 21:02:43.371353 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-02-23 21:02:43.371359 | orchestrator | Monday 23 February 2026 21:02:37 +0000 (0:00:00.303) 
0:00:12.921 ******* 2026-02-23 21:02:43.371365 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.371371 | orchestrator | 2026-02-23 21:02:43.371377 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-02-23 21:02:43.371383 | orchestrator | Monday 23 February 2026 21:02:37 +0000 (0:00:00.137) 0:00:13.058 ******* 2026-02-23 21:02:43.371389 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.371395 | orchestrator | 2026-02-23 21:02:43.371400 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-23 21:02:43.371407 | orchestrator | Monday 23 February 2026 21:02:37 +0000 (0:00:00.124) 0:00:13.183 ******* 2026-02-23 21:02:43.371414 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.371420 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:43.371427 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:43.371433 | orchestrator | 2026-02-23 21:02:43.371440 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-02-23 21:02:43.371447 | orchestrator | Monday 23 February 2026 21:02:38 +0000 (0:00:00.285) 0:00:13.468 ******* 2026-02-23 21:02:43.371453 | orchestrator | changed: [testbed-node-3] 2026-02-23 21:02:43.371460 | orchestrator | changed: [testbed-node-4] 2026-02-23 21:02:43.371467 | orchestrator | changed: [testbed-node-5] 2026-02-23 21:02:43.371473 | orchestrator | 2026-02-23 21:02:43.371480 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-02-23 21:02:43.371487 | orchestrator | Monday 23 February 2026 21:02:40 +0000 (0:00:02.753) 0:00:16.222 ******* 2026-02-23 21:02:43.371493 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.371501 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:43.371508 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:43.371514 | orchestrator | 2026-02-23 21:02:43.371520 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-02-23 21:02:43.371526 | orchestrator | Monday 23 February 2026 21:02:41 +0000 (0:00:00.513) 0:00:16.735 ******* 2026-02-23 21:02:43.371534 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.371541 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:43.371547 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:43.371553 | orchestrator | 2026-02-23 21:02:43.371559 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-02-23 21:02:43.371567 | orchestrator | Monday 23 February 2026 21:02:41 +0000 (0:00:00.496) 0:00:17.231 ******* 2026-02-23 21:02:43.371573 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.371585 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:02:43.371591 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:02:43.371597 | orchestrator | 2026-02-23 21:02:43.371603 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-02-23 21:02:43.371616 | orchestrator | Monday 23 February 2026 21:02:42 +0000 (0:00:00.305) 0:00:17.537 ******* 2026-02-23 21:02:43.371623 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:43.371628 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:43.371634 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:43.371641 | orchestrator | 2026-02-23 21:02:43.371647 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-02-23 21:02:43.371653 | orchestrator | Monday 23 February 2026 21:02:42 +0000 (0:00:00.513) 0:00:18.050 ******* 2026-02-23 21:02:43.371659 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.371665 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:02:43.371671 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:02:43.371677 | orchestrator | 2026-02-23 21:02:43.371683 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-02-23 21:02:43.371689 | orchestrator | Monday 23 February 2026 21:02:43 +0000 (0:00:00.294) 0:00:18.344 ******* 2026-02-23 21:02:43.371695 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:43.371700 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:02:43.371706 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:02:43.371712 | orchestrator | 2026-02-23 21:02:43.371727 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-23 21:02:50.944176 | orchestrator | Monday 23 February 2026 21:02:43 +0000 (0:00:00.309) 0:00:18.654 ******* 2026-02-23 21:02:50.944258 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:50.944266 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:50.944270 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:50.944274 | orchestrator | 2026-02-23 21:02:50.944279 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-02-23 21:02:50.944284 | orchestrator | Monday 23 February 2026 21:02:43 +0000 (0:00:00.519) 0:00:19.173 ******* 2026-02-23 21:02:50.944288 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:50.944292 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:50.944296 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:50.944299 | orchestrator | 2026-02-23 21:02:50.944304 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-02-23 21:02:50.944308 | orchestrator | Monday 23 February 2026 21:02:44 +0000 (0:00:00.755) 0:00:19.929 ******* 2026-02-23 21:02:50.944312 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:50.944315 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:50.944319 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:50.944323 | orchestrator | 2026-02-23 21:02:50.944327 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-02-23 
21:02:50.944331 | orchestrator | Monday 23 February 2026 21:02:44 +0000 (0:00:00.302) 0:00:20.232 ******* 2026-02-23 21:02:50.944335 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:50.944340 | orchestrator | skipping: [testbed-node-4] 2026-02-23 21:02:50.944345 | orchestrator | skipping: [testbed-node-5] 2026-02-23 21:02:50.944351 | orchestrator | 2026-02-23 21:02:50.944357 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-02-23 21:02:50.944362 | orchestrator | Monday 23 February 2026 21:02:45 +0000 (0:00:00.320) 0:00:20.553 ******* 2026-02-23 21:02:50.944371 | orchestrator | ok: [testbed-node-3] 2026-02-23 21:02:50.944378 | orchestrator | ok: [testbed-node-4] 2026-02-23 21:02:50.944385 | orchestrator | ok: [testbed-node-5] 2026-02-23 21:02:50.944391 | orchestrator | 2026-02-23 21:02:50.944397 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-23 21:02:50.944403 | orchestrator | Monday 23 February 2026 21:02:45 +0000 (0:00:00.335) 0:00:20.888 ******* 2026-02-23 21:02:50.944409 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:50.944414 | orchestrator | 2026-02-23 21:02:50.944420 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-23 21:02:50.944445 | orchestrator | Monday 23 February 2026 21:02:46 +0000 (0:00:00.657) 0:00:21.546 ******* 2026-02-23 21:02:50.944453 | orchestrator | skipping: [testbed-node-3] 2026-02-23 21:02:50.944459 | orchestrator | 2026-02-23 21:02:50.944466 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-23 21:02:50.944471 | orchestrator | Monday 23 February 2026 21:02:46 +0000 (0:00:00.309) 0:00:21.856 ******* 2026-02-23 21:02:50.944477 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:50.944483 | orchestrator | 2026-02-23 21:02:50.944489 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-23 21:02:50.944494 | orchestrator | Monday 23 February 2026 21:02:48 +0000 (0:00:01.576) 0:00:23.432 ******* 2026-02-23 21:02:50.944500 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:50.944506 | orchestrator | 2026-02-23 21:02:50.944511 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-23 21:02:50.944517 | orchestrator | Monday 23 February 2026 21:02:48 +0000 (0:00:00.267) 0:00:23.700 ******* 2026-02-23 21:02:50.944524 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:50.944528 | orchestrator | 2026-02-23 21:02:50.944532 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:50.944536 | orchestrator | Monday 23 February 2026 21:02:48 +0000 (0:00:00.235) 0:00:23.935 ******* 2026-02-23 21:02:50.944539 | orchestrator | 2026-02-23 21:02:50.944543 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:50.944547 | orchestrator | Monday 23 February 2026 21:02:48 +0000 (0:00:00.070) 0:00:24.005 ******* 2026-02-23 21:02:50.944551 | orchestrator | 2026-02-23 21:02:50.944555 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-23 21:02:50.944559 | orchestrator | Monday 23 February 2026 21:02:48 +0000 (0:00:00.084) 0:00:24.090 ******* 2026-02-23 21:02:50.944563 | orchestrator | 2026-02-23 21:02:50.944567 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-23 21:02:50.944571 | orchestrator | Monday 23 February 2026 21:02:48 +0000 (0:00:00.070) 0:00:24.160 ******* 2026-02-23 21:02:50.944575 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-23 21:02:50.944578 | orchestrator | 
2026-02-23 21:02:50.944582 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-23 21:02:50.944586 | orchestrator | Monday 23 February 2026 21:02:50 +0000 (0:00:01.237) 0:00:25.398 ******* 2026-02-23 21:02:50.944590 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-02-23 21:02:50.944594 | orchestrator |  "msg": [ 2026-02-23 21:02:50.944598 | orchestrator |  "Validator run completed.", 2026-02-23 21:02:50.944602 | orchestrator |  "You can find the report file here:", 2026-02-23 21:02:50.944606 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-23T21:02:25+00:00-report.json", 2026-02-23 21:02:50.944611 | orchestrator |  "on the following host:", 2026-02-23 21:02:50.944617 | orchestrator |  "testbed-manager" 2026-02-23 21:02:50.944624 | orchestrator |  ] 2026-02-23 21:02:50.944631 | orchestrator | } 2026-02-23 21:02:50.944637 | orchestrator | 2026-02-23 21:02:50.944642 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 21:02:50.944650 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-23 21:02:50.944658 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-23 21:02:50.944700 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-23 21:02:50.944708 | orchestrator | 2026-02-23 21:02:50.944716 | orchestrator | 2026-02-23 21:02:50.944722 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 21:02:50.944782 | orchestrator | Monday 23 February 2026 21:02:50 +0000 (0:00:00.579) 0:00:25.978 ******* 2026-02-23 21:02:50.944791 | orchestrator | =============================================================================== 2026-02-23 21:02:50.944798 | orchestrator | List ceph LVM volumes 
and collect data ---------------------------------- 2.75s 2026-02-23 21:02:50.944806 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s 2026-02-23 21:02:50.944813 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.48s 2026-02-23 21:02:50.944820 | orchestrator | Write report file ------------------------------------------------------- 1.24s 2026-02-23 21:02:50.944827 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s 2026-02-23 21:02:50.944834 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.76s 2026-02-23 21:02:50.944906 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.75s 2026-02-23 21:02:50.944910 | orchestrator | Create report output directory ------------------------------------------ 0.72s 2026-02-23 21:02:50.944915 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.66s 2026-02-23 21:02:50.944920 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.64s 2026-02-23 21:02:50.944925 | orchestrator | Aggregate test results step one ----------------------------------------- 0.64s 2026-02-23 21:02:50.944929 | orchestrator | Print report file information ------------------------------------------- 0.58s 2026-02-23 21:02:50.944934 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.52s 2026-02-23 21:02:50.944938 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2026-02-23 21:02:50.944942 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.51s 2026-02-23 21:02:50.944946 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.51s 2026-02-23 21:02:50.944951 | orchestrator | Get extra vars for Ceph configuration 
----------------------------------- 0.50s 2026-02-23 21:02:50.944955 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.50s 2026-02-23 21:02:50.944959 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.50s 2026-02-23 21:02:50.944964 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.47s 2026-02-23 21:02:51.243817 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-02-23 21:02:51.249999 | orchestrator | + set -e 2026-02-23 21:02:51.250113 | orchestrator | + source /opt/manager-vars.sh 2026-02-23 21:02:51.250123 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-23 21:02:51.250130 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-23 21:02:51.250137 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-23 21:02:51.250141 | orchestrator | ++ CEPH_VERSION=reef 2026-02-23 21:02:51.250146 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-23 21:02:51.250150 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-23 21:02:51.250154 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-23 21:02:51.250789 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-23 21:02:51.250802 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-23 21:02:51.250807 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-23 21:02:51.250811 | orchestrator | ++ export ARA=false 2026-02-23 21:02:51.250816 | orchestrator | ++ ARA=false 2026-02-23 21:02:51.250821 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-23 21:02:51.250825 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-23 21:02:51.250830 | orchestrator | ++ export TEMPEST=false 2026-02-23 21:02:51.250834 | orchestrator | ++ TEMPEST=false 2026-02-23 21:02:51.250892 | orchestrator | ++ export IS_ZUUL=true 2026-02-23 21:02:51.250897 | orchestrator | ++ IS_ZUUL=true 2026-02-23 21:02:51.250901 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96 2026-02-23 
21:02:51.250906 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.96 2026-02-23 21:02:51.250910 | orchestrator | ++ export EXTERNAL_API=false 2026-02-23 21:02:51.250915 | orchestrator | ++ EXTERNAL_API=false 2026-02-23 21:02:51.250919 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-23 21:02:51.250923 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-23 21:02:51.250928 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-23 21:02:51.250932 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-23 21:02:51.250956 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-23 21:02:51.250960 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-23 21:02:51.250964 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-23 21:02:51.250968 | orchestrator | + source /etc/os-release 2026-02-23 21:02:51.250972 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-02-23 21:02:51.250975 | orchestrator | ++ NAME=Ubuntu 2026-02-23 21:02:51.250979 | orchestrator | ++ VERSION_ID=24.04 2026-02-23 21:02:51.250983 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-02-23 21:02:51.250987 | orchestrator | ++ VERSION_CODENAME=noble 2026-02-23 21:02:51.250990 | orchestrator | ++ ID=ubuntu 2026-02-23 21:02:51.250994 | orchestrator | ++ ID_LIKE=debian 2026-02-23 21:02:51.250998 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-02-23 21:02:51.251001 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-02-23 21:02:51.251005 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-02-23 21:02:51.251010 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-02-23 21:02:51.251015 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-02-23 21:02:51.251018 | orchestrator | ++ LOGO=ubuntu-logo 2026-02-23 21:02:51.251022 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-02-23 21:02:51.251041 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic 
mysql-client' 2026-02-23 21:02:51.251053 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-23 21:02:51.275024 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-23 21:03:13.919865 | orchestrator | 2026-02-23 21:03:13.919942 | orchestrator | # Status of Elasticsearch 2026-02-23 21:03:13.919958 | orchestrator | 2026-02-23 21:03:13.919969 | orchestrator | + pushd /opt/configuration/contrib 2026-02-23 21:03:13.919981 | orchestrator | + echo 2026-02-23 21:03:13.919992 | orchestrator | + echo '# Status of Elasticsearch' 2026-02-23 21:03:13.920004 | orchestrator | + echo 2026-02-23 21:03:13.920017 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-02-23 21:03:14.095788 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-02-23 21:03:14.096415 | orchestrator | 2026-02-23 21:03:14.096452 | orchestrator | # Status of MariaDB 2026-02-23 21:03:14.096460 | orchestrator | 2026-02-23 21:03:14.096465 | orchestrator | + echo 2026-02-23 21:03:14.096471 | orchestrator | + echo '# Status of MariaDB' 2026-02-23 21:03:14.096477 | orchestrator | + echo 2026-02-23 21:03:14.096543 | orchestrator | ++ semver latest 10.0.0-0 2026-02-23 21:03:14.149253 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-23 21:03:14.149304 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-23 21:03:14.149308 | orchestrator | + osism status database 2026-02-23 21:03:16.216354 | orchestrator | 2026-02-23 21:03:16 | ERROR  | Unable to get ansible vault password 2026-02-23 21:03:16.216449 | 
orchestrator | 2026-02-23 21:03:16 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-23 21:03:16.216474 | orchestrator | 2026-02-23 21:03:16 | ERROR  | Dropping encrypted entries 2026-02-23 21:03:16.250370 | orchestrator | 2026-02-23 21:03:16 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-02-23 21:03:16.260975 | orchestrator | 2026-02-23 21:03:16 | INFO  | Cluster Status: Primary 2026-02-23 21:03:16.261035 | orchestrator | 2026-02-23 21:03:16 | INFO  | Connected: ON 2026-02-23 21:03:16.261047 | orchestrator | 2026-02-23 21:03:16 | INFO  | Ready: ON 2026-02-23 21:03:16.261056 | orchestrator | 2026-02-23 21:03:16 | INFO  | Cluster Size: 3 2026-02-23 21:03:16.261064 | orchestrator | 2026-02-23 21:03:16 | INFO  | Local State: Synced 2026-02-23 21:03:16.261073 | orchestrator | 2026-02-23 21:03:16 | INFO  | Cluster State UUID: ade6d77c-10f7-11f1-bc1b-17c4cd0521ea 2026-02-23 21:03:16.261083 | orchestrator | 2026-02-23 21:03:16 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-02-23 21:03:16.261189 | orchestrator | 2026-02-23 21:03:16 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-02-23 21:03:16.261204 | orchestrator | 2026-02-23 21:03:16 | INFO  | Local Node UUID: de54e793-10f7-11f1-b097-ab0dd233d8a6 2026-02-23 21:03:16.261209 | orchestrator | 2026-02-23 21:03:16 | INFO  | Flow Control Paused: 0.00% 2026-02-23 21:03:16.261215 | orchestrator | 2026-02-23 21:03:16 | INFO  | Recv Queue Avg: 0.0126582 2026-02-23 21:03:16.261220 | orchestrator | 2026-02-23 21:03:16 | INFO  | Send Queue Avg: 0.000454821 2026-02-23 21:03:16.261225 | orchestrator | 2026-02-23 21:03:16 | INFO  | Transactions: 4321 local commits, 6540 replicated, 79 received 2026-02-23 21:03:16.261230 | orchestrator | 2026-02-23 21:03:16 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-02-23 21:03:16.261243 | orchestrator | 2026-02-23 21:03:16 | INFO  | MariaDB Uptime: 22 
minutes, 57 seconds 2026-02-23 21:03:16.261248 | orchestrator | 2026-02-23 21:03:16 | INFO  | Threads: 124 connected, 1 running 2026-02-23 21:03:16.261253 | orchestrator | 2026-02-23 21:03:16 | INFO  | Queries: 136278 total, 0 slow 2026-02-23 21:03:16.261259 | orchestrator | 2026-02-23 21:03:16 | INFO  | Aborted Connects: 126 2026-02-23 21:03:16.261473 | orchestrator | 2026-02-23 21:03:16 | INFO  | MariaDB Galera Cluster validation PASSED 2026-02-23 21:03:16.564892 | orchestrator | 2026-02-23 21:03:16.564959 | orchestrator | # Status of Prometheus 2026-02-23 21:03:16.564968 | orchestrator | 2026-02-23 21:03:16.564974 | orchestrator | + echo 2026-02-23 21:03:16.564979 | orchestrator | + echo '# Status of Prometheus' 2026-02-23 21:03:16.564985 | orchestrator | + echo 2026-02-23 21:03:16.564991 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-02-23 21:03:16.618345 | orchestrator | Unauthorized 2026-02-23 21:03:16.621183 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-02-23 21:03:16.683542 | orchestrator | Unauthorized 2026-02-23 21:03:16.687702 | orchestrator | 2026-02-23 21:03:16.687776 | orchestrator | # Status of RabbitMQ 2026-02-23 21:03:16.687790 | orchestrator | 2026-02-23 21:03:16.687838 | orchestrator | + echo 2026-02-23 21:03:16.687848 | orchestrator | + echo '# Status of RabbitMQ' 2026-02-23 21:03:16.687858 | orchestrator | + echo 2026-02-23 21:03:16.687868 | orchestrator | ++ semver latest 10.0.0-0 2026-02-23 21:03:16.728242 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-23 21:03:16.728318 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-23 21:03:16.728334 | orchestrator | + osism status messaging 2026-02-23 21:03:38.095023 | orchestrator | 2026-02-23 21:03:38 | ERROR  | Unable to get ansible vault password 2026-02-23 21:03:38.095133 | orchestrator | 2026-02-23 21:03:38 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-23 
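`osism status database` validates the Galera cluster from the `wsrep_*` status variables it prints above (Connected, Ready, Cluster Size, Local State). A hedged sketch of that kind of validation; the function and thresholds are illustrative, not the actual osism implementation:

```python
# Illustrative Galera health check mirroring the fields printed in
# the log (Connected: ON, Ready: ON, Cluster Size: 3, Local State:
# Synced). Not the real osism logic; names/criteria are assumptions.

def galera_ok(wsrep: dict, expected_size: int = 3) -> bool:
    """Pass only when the node is a synced member of a full cluster."""
    return (
        wsrep.get("wsrep_connected") == "ON"
        and wsrep.get("wsrep_ready") == "ON"
        and int(wsrep.get("wsrep_cluster_size", 0)) == expected_size
        and wsrep.get("wsrep_local_state_comment") == "Synced"
    )

status = {
    "wsrep_connected": "ON",
    "wsrep_ready": "ON",
    "wsrep_cluster_size": "3",
    "wsrep_local_state_comment": "Synced",
}
print("PASSED" if galera_ok(status) else "FAILED")
```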
21:03:38.095143 | orchestrator | 2026-02-23 21:03:38 | ERROR  | Dropping encrypted entries 2026-02-23 21:03:38.136387 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-02-23 21:03:38.196815 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-02-23 21:03:38.196976 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-02-23 21:03:38.196990 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-02-23 21:03:38.196995 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Cluster Size: 3 2026-02-23 21:03:38.197010 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-02-23 21:03:38.197263 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-02-23 21:03:38.197666 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-02-23 21:03:38.197922 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Connections: 208, Channels: 207, Queues: 180 2026-02-23 21:03:38.198212 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Messages: 218 total, 218 ready, 0 unacked 2026-02-23 21:03:38.198588 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Message Rates: 6.8/s publish, 6.6/s deliver 2026-02-23 21:03:38.198915 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-02-23 21:03:38.199277 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-02-23 21:03:38.200453 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] File Descriptors: 120/1024 2026-02-23 21:03:38.200494 | 
orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-0] Sockets: 72/832 2026-02-23 21:03:38.200649 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-02-23 21:03:38.264981 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-02-23 21:03:38.265063 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-02-23 21:03:38.265126 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-02-23 21:03:38.265134 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Cluster Size: 3 2026-02-23 21:03:38.265143 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-02-23 21:03:38.265196 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-02-23 21:03:38.265203 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-02-23 21:03:38.265210 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Connections: 208, Channels: 207, Queues: 180 2026-02-23 21:03:38.265505 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Messages: 218 total, 218 ready, 0 unacked 2026-02-23 21:03:38.265615 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Message Rates: 6.8/s publish, 6.6/s deliver 2026-02-23 21:03:38.265802 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Disk Free: 58.5 GB (limit: 0.0 GB) 2026-02-23 21:03:38.265910 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-02-23 21:03:38.266179 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-1] File Descriptors: 116/1024 2026-02-23 21:03:38.266339 | orchestrator | 
2026-02-23 21:03:38 | INFO  | [testbed-node-1] Sockets: 70/832 2026-02-23 21:03:38.266354 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-02-23 21:03:38.331674 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-02-23 21:03:38.331805 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-02-23 21:03:38.331816 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-02-23 21:03:38.331834 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Cluster Size: 3 2026-02-23 21:03:38.332051 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-02-23 21:03:38.332342 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-02-23 21:03:38.332478 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-02-23 21:03:38.332633 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Connections: 208, Channels: 207, Queues: 180 2026-02-23 21:03:38.333136 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Messages: 218 total, 218 ready, 0 unacked 2026-02-23 21:03:38.334516 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Message Rates: 6.8/s publish, 6.6/s deliver 2026-02-23 21:03:38.334558 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Disk Free: 58.5 GB (limit: 0.0 GB) 2026-02-23 21:03:38.334569 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-02-23 21:03:38.334576 | orchestrator | 2026-02-23 21:03:38 | INFO  | [testbed-node-2] File Descriptors: 112/1024 2026-02-23 21:03:38.334583 | orchestrator | 2026-02-23 21:03:38 | 
INFO  | [testbed-node-2] Sockets: 66/832 2026-02-23 21:03:38.334591 | orchestrator | 2026-02-23 21:03:38 | INFO  | RabbitMQ Cluster validation PASSED 2026-02-23 21:03:38.651145 | orchestrator | 2026-02-23 21:03:38.651236 | orchestrator | # Status of Redis 2026-02-23 21:03:38.651248 | orchestrator | 2026-02-23 21:03:38.651256 | orchestrator | + echo 2026-02-23 21:03:38.651263 | orchestrator | + echo '# Status of Redis' 2026-02-23 21:03:38.651269 | orchestrator | + echo 2026-02-23 21:03:38.651275 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-02-23 21:03:38.655847 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001713s;;;0.000000;10.000000 2026-02-23 21:03:38.656159 | orchestrator | + popd 2026-02-23 21:03:38.656278 | orchestrator | + echo 2026-02-23 21:03:38.656413 | orchestrator | 2026-02-23 21:03:38.656423 | orchestrator | + echo '# Create backup of MariaDB database' 2026-02-23 21:03:38.656461 | orchestrator | # Create backup of MariaDB database 2026-02-23 21:03:38.656572 | orchestrator | + echo 2026-02-23 21:03:38.656676 | orchestrator | 2026-02-23 21:03:38.656884 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-02-23 21:03:40.776392 | orchestrator | 2026-02-23 21:03:40 | INFO  | Prepare task for execution of mariadb_backup. 2026-02-23 21:03:40.837562 | orchestrator | 2026-02-23 21:03:40 | INFO  | Task f63fdb92-fdbe-45b4-88e9-c0bb725b36d2 (mariadb_backup) was prepared for execution. 2026-02-23 21:03:40.837620 | orchestrator | 2026-02-23 21:03:40 | INFO  | It takes a moment until task f63fdb92-fdbe-45b4-88e9-c0bb725b36d2 (mariadb_backup) has been started and output is visible here. 
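The per-node RabbitMQ validation above keys on the same few facts for each node: the node list matches the running-node list, the cluster has the expected size, and there are no partitions. A sketch of that check over Management-API-style fields (the helper name and field selection are assumptions):

```python
# Sketch of a RabbitMQ cluster check over fields like those the
# Management API returns (nodes, running nodes, partitions).
# Helper name and criteria are illustrative assumptions.

def rabbit_ok(overview: dict, expected_size: int = 3) -> bool:
    """Healthy: full cluster, all nodes running, no partitions."""
    nodes = set(overview.get("nodes", []))
    running = set(overview.get("running_nodes", []))
    return (
        len(nodes) == expected_size
        and nodes == running
        and not overview.get("partitions")
    )

sample = {
    "nodes": ["rabbit@testbed-node-0", "rabbit@testbed-node-1",
              "rabbit@testbed-node-2"],
    "running_nodes": ["rabbit@testbed-node-0", "rabbit@testbed-node-1",
                      "rabbit@testbed-node-2"],
    "partitions": [],
}
print(rabbit_ok(sample))
```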
2026-02-23 21:04:34.987050 | orchestrator | 2026-02-23 21:04:34.987146 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-23 21:04:34.987160 | orchestrator | 2026-02-23 21:04:34.987169 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-23 21:04:34.987761 | orchestrator | Monday 23 February 2026 21:03:45 +0000 (0:00:00.244) 0:00:00.244 ******* 2026-02-23 21:04:34.987787 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:04:34.987797 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:04:34.987805 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:04:34.987813 | orchestrator | 2026-02-23 21:04:34.987820 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-23 21:04:34.987829 | orchestrator | Monday 23 February 2026 21:03:45 +0000 (0:00:00.378) 0:00:00.623 ******* 2026-02-23 21:04:34.987837 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-23 21:04:34.987846 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-23 21:04:34.987854 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-23 21:04:34.987875 | orchestrator | 2026-02-23 21:04:34.987880 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-23 21:04:34.987884 | orchestrator | 2026-02-23 21:04:34.987889 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-23 21:04:34.987897 | orchestrator | Monday 23 February 2026 21:03:46 +0000 (0:00:00.560) 0:00:01.183 ******* 2026-02-23 21:04:34.987905 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-23 21:04:34.987913 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-23 21:04:34.987920 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-23 21:04:34.987928 | orchestrator | 
2026-02-23 21:04:34.987936 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-23 21:04:34.987944 | orchestrator | Monday 23 February 2026 21:03:46 +0000 (0:00:00.392) 0:00:01.576 ******* 2026-02-23 21:04:34.987952 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-23 21:04:34.987961 | orchestrator | 2026-02-23 21:04:34.987967 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-02-23 21:04:34.987974 | orchestrator | Monday 23 February 2026 21:03:46 +0000 (0:00:00.562) 0:00:02.139 ******* 2026-02-23 21:04:34.987983 | orchestrator | ok: [testbed-node-1] 2026-02-23 21:04:34.987994 | orchestrator | ok: [testbed-node-0] 2026-02-23 21:04:34.988002 | orchestrator | ok: [testbed-node-2] 2026-02-23 21:04:34.988046 | orchestrator | 2026-02-23 21:04:34.988052 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-02-23 21:04:34.988057 | orchestrator | Monday 23 February 2026 21:03:50 +0000 (0:00:03.320) 0:00:05.459 ******* 2026-02-23 21:04:34.988061 | orchestrator | skipping: [testbed-node-1] 2026-02-23 21:04:34.988066 | orchestrator | skipping: [testbed-node-2] 2026-02-23 21:04:34.988071 | orchestrator | changed: [testbed-node-0] 2026-02-23 21:04:34.988076 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-23 21:04:34.988080 | orchestrator | 2026-02-23 21:04:34.988085 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-23 21:04:34.988089 | orchestrator | skipping: no hosts matched 2026-02-23 21:04:34.988094 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-23 21:04:34.988099 | orchestrator | 2026-02-23 21:04:34.988115 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-02-23 21:04:34.988123 | orchestrator | skipping: no hosts matched 2026-02-23 21:04:34.988137 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-23 21:04:34.988144 | orchestrator | mariadb_bootstrap_restart 2026-02-23 21:04:34.988152 | orchestrator | 2026-02-23 21:04:34.988159 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-23 21:04:34.988166 | orchestrator | skipping: no hosts matched 2026-02-23 21:04:34.988173 | orchestrator | 2026-02-23 21:04:34.988180 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-23 21:04:34.988189 | orchestrator | 2026-02-23 21:04:34.988196 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-23 21:04:34.988216 | orchestrator | Monday 23 February 2026 21:04:33 +0000 (0:00:43.658) 0:00:49.117 ******* 2026-02-23 21:04:34.988221 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:04:34.988226 | orchestrator | skipping: [testbed-node-1] 2026-02-23 21:04:34.988231 | orchestrator | skipping: [testbed-node-2] 2026-02-23 21:04:34.988235 | orchestrator | 2026-02-23 21:04:34.988240 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-23 21:04:34.988244 | orchestrator | Monday 23 February 2026 21:04:34 +0000 (0:00:00.301) 0:00:49.419 ******* 2026-02-23 21:04:34.988249 | orchestrator | skipping: [testbed-node-0] 2026-02-23 21:04:34.988254 | orchestrator | skipping: [testbed-node-1] 2026-02-23 21:04:34.988259 | orchestrator | skipping: [testbed-node-2] 2026-02-23 21:04:34.988263 | orchestrator | 2026-02-23 21:04:34.988268 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 21:04:34.988279 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-23 21:04:34.988285 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-23 21:04:34.988290 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-23 21:04:34.988294 | orchestrator | 2026-02-23 21:04:34.988299 | orchestrator | 2026-02-23 21:04:34.988303 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 21:04:34.988308 | orchestrator | Monday 23 February 2026 21:04:34 +0000 (0:00:00.395) 0:00:49.815 ******* 2026-02-23 21:04:34.988312 | orchestrator | =============================================================================== 2026-02-23 21:04:34.988317 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 43.66s 2026-02-23 21:04:34.988336 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.32s 2026-02-23 21:04:34.988342 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2026-02-23 21:04:34.988346 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2026-02-23 21:04:34.988351 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.40s 2026-02-23 21:04:34.988355 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2026-02-23 21:04:34.988360 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2026-02-23 21:04:34.988364 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2026-02-23 21:04:35.333271 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-02-23 21:04:35.338637 | orchestrator | + set -e 2026-02-23 21:04:35.338804 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-23 21:04:35.338816 | 
orchestrator | ++ export INTERACTIVE=false 2026-02-23 21:04:35.338828 | orchestrator | ++ INTERACTIVE=false 2026-02-23 21:04:35.338833 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-23 21:04:35.338836 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-23 21:04:35.338841 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-23 21:04:35.339747 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-23 21:04:35.346401 | orchestrator | 2026-02-23 21:04:35.346466 | orchestrator | # OpenStack endpoints 2026-02-23 21:04:35.346472 | orchestrator | 2026-02-23 21:04:35.346477 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-23 21:04:35.346481 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-23 21:04:35.346485 | orchestrator | + export OS_CLOUD=admin 2026-02-23 21:04:35.346489 | orchestrator | + OS_CLOUD=admin 2026-02-23 21:04:35.346493 | orchestrator | + echo 2026-02-23 21:04:35.346497 | orchestrator | + echo '# OpenStack endpoints' 2026-02-23 21:04:35.346501 | orchestrator | + echo 2026-02-23 21:04:35.346505 | orchestrator | + openstack endpoint list 2026-02-23 21:04:38.636462 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-23 21:04:38.636581 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-02-23 21:04:38.636597 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-23 21:04:38.636606 | orchestrator | | 075788d2d8b24acaad2dbbd02c971d78 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-02-23 21:04:38.636614 | orchestrator | | 21f932773b004916942f21f6839880f7 | RegionOne | 
magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-02-23 21:04:38.636648 | orchestrator | | 2c3ce9df72c7479282001352d85a2d2e | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-02-23 21:04:38.636733 | orchestrator | | 2f6b2db1489c486d800a3b3b3cb00b46 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-23 21:04:38.636744 | orchestrator | | 4421fff631964709a74e37768a7fda64 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-02-23 21:04:38.636753 | orchestrator | | 465d9d5d210f426bb29cb1b1a7eb03b1 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-23 21:04:38.636762 | orchestrator | | 495c8f1666df458aac5132d5420a74b6 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-02-23 21:04:38.636771 | orchestrator | | 58b7607bf0604369adc0fbbded8160a1 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-02-23 21:04:38.636779 | orchestrator | | 8564ed5ebeb34678a6459d9cd9519f71 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-02-23 21:04:38.636788 | orchestrator | | 8a7fb84873ce47f59052b0db2c7fba13 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-02-23 21:04:38.636796 | orchestrator | | 91104a48b5c540fe81a7913a804fe7c6 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-02-23 21:04:38.636805 | orchestrator | | 9ce881efe4cf4f2193560442ed3bb104 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-23 21:04:38.636810 | orchestrator | | b49a27d3e28648ceaf7538751ae08f74 | RegionOne | designate | dns | True | 
internal | https://api-int.testbed.osism.xyz:9001 | 2026-02-23 21:04:38.636815 | orchestrator | | c2f6bfc2f270445da6160f37dc57724c | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-02-23 21:04:38.636819 | orchestrator | | c7cc5620eb24406a832b6dd62f64cb6e | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-02-23 21:04:38.636824 | orchestrator | | d37ef67dcfe743b3b1058e7b5ce58c2e | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-02-23 21:04:38.636832 | orchestrator | | db75c43caa4b44a1b7276d6cad5169f2 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-02-23 21:04:38.636840 | orchestrator | | de6f2a269fd9442ca841bc2b443d1a2b | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-02-23 21:04:38.636849 | orchestrator | | e0d9ceb886844848b60eda2be137a8c2 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-02-23 21:04:38.636862 | orchestrator | | e9c606b1e31c4a59b92375558246d6e3 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-02-23 21:04:38.636888 | orchestrator | | eb2d753a7f0947bb91c1ad4ce01b08a6 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-02-23 21:04:38.636897 | orchestrator | | f14663fe47c340f89893af2c4957788a | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-23 21:04:38.636904 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-23 21:04:38.859823 | orchestrator | 2026-02-23 21:04:38.859908 | orchestrator | # Cinder 2026-02-23 21:04:38.859919 | orchestrator | 2026-02-23 21:04:38.859925 | 
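The endpoint table above shows every service registered on both the internal and public interfaces. That pairing is easy to verify programmatically over `openstack endpoint list -f json`-style rows; the helper below is a sketch (column names follow the CLI table, the function name is an assumption):

```python
# Verify each catalog service exposes all required interfaces,
# working over rows shaped like `openstack endpoint list -f json`
# output. Helper name is an illustrative assumption.
from collections import defaultdict

def missing_interfaces(endpoints, required=("internal", "public")):
    """Return the set of services lacking any required interface."""
    seen = defaultdict(set)
    for ep in endpoints:
        seen[ep["Service Name"]].add(ep["Interface"])
    return {svc for svc, ifaces in seen.items()
            if not set(required) <= ifaces}

eps = [
    {"Service Name": "keystone", "Interface": "internal"},
    {"Service Name": "keystone", "Interface": "public"},
    {"Service Name": "magnum", "Interface": "internal"},
]
print(missing_interfaces(eps))  # magnum lacks a public endpoint here
```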
orchestrator | + echo 2026-02-23 21:04:38.859932 | orchestrator | + echo '# Cinder' 2026-02-23 21:04:38.859939 | orchestrator | + echo 2026-02-23 21:04:38.859945 | orchestrator | + openstack volume service list 2026-02-23 21:04:42.550946 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-23 21:04:42.551007 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-02-23 21:04:42.551015 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-23 21:04:42.551031 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-23T21:04:33.000000 | 2026-02-23 21:04:42.551037 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-23T21:04:34.000000 | 2026-02-23 21:04:42.551042 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-23T21:04:33.000000 | 2026-02-23 21:04:42.551047 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-23T21:04:33.000000 | 2026-02-23 21:04:42.551053 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-23T21:04:39.000000 | 2026-02-23 21:04:42.551058 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-23T21:04:39.000000 | 2026-02-23 21:04:42.551063 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-23T21:04:41.000000 | 2026-02-23 21:04:42.551068 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-23T21:04:32.000000 | 2026-02-23 21:04:42.551073 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-23T21:04:33.000000 | 2026-02-23 21:04:42.551079 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 
2026-02-23 21:04:42.827538 | orchestrator | 2026-02-23 21:04:42.827633 | orchestrator | # Neutron 2026-02-23 21:04:42.827726 | orchestrator | 2026-02-23 21:04:42.827749 | orchestrator | + echo 2026-02-23 21:04:42.827765 | orchestrator | + echo '# Neutron' 2026-02-23 21:04:42.827783 | orchestrator | + echo 2026-02-23 21:04:42.827801 | orchestrator | + openstack network agent list 2026-02-23 21:04:45.643716 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-23 21:04:45.643831 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-02-23 21:04:45.643843 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-23 21:04:45.643850 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-02-23 21:04:45.643857 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-02-23 21:04:45.643864 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-02-23 21:04:45.643871 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-02-23 21:04:45.643878 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-02-23 21:04:45.643884 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-02-23 21:04:45.643890 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-23 21:04:45.643923 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent 
| testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-23 21:04:45.643929 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-23 21:04:45.643936 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-23 21:04:45.900432 | orchestrator | + openstack network service provider list 2026-02-23 21:04:48.466780 | orchestrator | +---------------+----------+---------+ 2026-02-23 21:04:48.466880 | orchestrator | | Service Type | Name | Default | 2026-02-23 21:04:48.466889 | orchestrator | +---------------+----------+---------+ 2026-02-23 21:04:48.466897 | orchestrator | | VPN | openswan | True | 2026-02-23 21:04:48.466903 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-02-23 21:04:48.466909 | orchestrator | +---------------+----------+---------+ 2026-02-23 21:04:48.724877 | orchestrator | 2026-02-23 21:04:48.724949 | orchestrator | # Nova 2026-02-23 21:04:48.724955 | orchestrator | 2026-02-23 21:04:48.724959 | orchestrator | + echo 2026-02-23 21:04:48.724963 | orchestrator | + echo '# Nova' 2026-02-23 21:04:48.724968 | orchestrator | + echo 2026-02-23 21:04:48.724972 | orchestrator | + openstack compute service list 2026-02-23 21:04:51.392205 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-23 21:04:51.392276 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-02-23 21:04:51.392285 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-23 21:04:51.392291 | orchestrator | | aadf935d-7c29-4362-be34-bf00c645cda1 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-23T21:04:50.000000 | 
2026-02-23 21:04:51.392298 | orchestrator | | 4da8675c-e62d-42b4-86b0-c68629f88fa8 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-23T21:04:50.000000 | 2026-02-23 21:04:51.392336 | orchestrator | | f2109275-ec97-4bdc-9c23-1bc203d9353c | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-23T21:04:51.000000 | 2026-02-23 21:04:51.392344 | orchestrator | | 4df63402-6e10-466d-8b05-16dd9526d60a | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-23T21:04:45.000000 | 2026-02-23 21:04:51.392351 | orchestrator | | 1d79dd2d-9f3a-461c-92e8-1ab65d93773a | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-23T21:04:46.000000 | 2026-02-23 21:04:51.392357 | orchestrator | | 01fc93b7-b612-480c-8011-b3aa00341b35 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-23T21:04:47.000000 | 2026-02-23 21:04:51.392363 | orchestrator | | 264d5121-d88b-4dd2-bbb8-384feba16359 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-23T21:04:49.000000 | 2026-02-23 21:04:51.392370 | orchestrator | | a4db6f3f-e89e-4e1c-aec8-278350074a06 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-23T21:04:50.000000 | 2026-02-23 21:04:51.392376 | orchestrator | | 85e85f6b-0b3e-480c-8027-2334aaa311df | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-23T21:04:50.000000 | 2026-02-23 21:04:51.392382 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-23 21:04:51.658273 | orchestrator | + openstack hypervisor list 2026-02-23 21:04:54.225032 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-23 21:04:54.225093 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-02-23 21:04:54.225103 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-23 21:04:54.225109 | orchestrator | | 08d183b2-b9ed-4d9e-801c-8ad171184c23 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-02-23 21:04:54.225131 | orchestrator | | 8ada871f-ccdf-44f1-b4b2-3b770ab9f7b9 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-02-23 21:04:54.225138 | orchestrator | | ba84e32c-1a94-4ebc-aa1f-617d456f2547 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-02-23 21:04:54.225145 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-23 21:04:54.509776 | orchestrator | 2026-02-23 21:04:54.509820 | orchestrator | # Run OpenStack test play 2026-02-23 21:04:54.509825 | orchestrator | 2026-02-23 21:04:54.509829 | orchestrator | + echo 2026-02-23 21:04:54.509832 | orchestrator | + echo '# Run OpenStack test play' 2026-02-23 21:04:54.509836 | orchestrator | + echo 2026-02-23 21:04:54.509839 | orchestrator | + osism apply --environment openstack test 2026-02-23 21:04:56.738100 | orchestrator | 2026-02-23 21:04:56 | INFO  | Trying to run play test in environment openstack 2026-02-23 21:04:56.748154 | orchestrator | 2026-02-23 21:04:56 | INFO  | Prepare task for execution of test. 2026-02-23 21:04:56.810613 | orchestrator | 2026-02-23 21:04:56 | INFO  | Task 7ddd83e4-c92e-4a7b-b9d4-ccb0a4a72ac5 (test) was prepared for execution. 2026-02-23 21:04:56.810715 | orchestrator | 2026-02-23 21:04:56 | INFO  | It takes a moment until task 7ddd83e4-c92e-4a7b-b9d4-ccb0a4a72ac5 (test) has been started and output is visible here. 
2026-02-23 21:07:28.940928 | orchestrator | 2026-02-23 21:07:28.941006 | orchestrator | PLAY [Create test project] ***************************************************** 2026-02-23 21:07:28.941015 | orchestrator | 2026-02-23 21:07:28.941020 | orchestrator | TASK [Create test domain] ****************************************************** 2026-02-23 21:07:28.941026 | orchestrator | Monday 23 February 2026 21:05:01 +0000 (0:00:00.078) 0:00:00.078 ******* 2026-02-23 21:07:28.941032 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941038 | orchestrator | 2026-02-23 21:07:28.941043 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-02-23 21:07:28.941048 | orchestrator | Monday 23 February 2026 21:05:04 +0000 (0:00:03.743) 0:00:03.822 ******* 2026-02-23 21:07:28.941053 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941058 | orchestrator | 2026-02-23 21:07:28.941064 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-02-23 21:07:28.941069 | orchestrator | Monday 23 February 2026 21:05:08 +0000 (0:00:04.110) 0:00:07.932 ******* 2026-02-23 21:07:28.941074 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941079 | orchestrator | 2026-02-23 21:07:28.941084 | orchestrator | TASK [Create test project] ***************************************************** 2026-02-23 21:07:28.941089 | orchestrator | Monday 23 February 2026 21:05:15 +0000 (0:00:06.531) 0:00:14.464 ******* 2026-02-23 21:07:28.941094 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941099 | orchestrator | 2026-02-23 21:07:28.941104 | orchestrator | TASK [Create test user] ******************************************************** 2026-02-23 21:07:28.941109 | orchestrator | Monday 23 February 2026 21:05:19 +0000 (0:00:04.048) 0:00:18.513 ******* 2026-02-23 21:07:28.941114 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941119 | orchestrator | 2026-02-23 21:07:28.941124 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-02-23 21:07:28.941129 | orchestrator | Monday 23 February 2026 21:05:23 +0000 (0:00:04.082) 0:00:22.595 ******* 2026-02-23 21:07:28.941135 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-02-23 21:07:28.941140 | orchestrator | changed: [localhost] => (item=member) 2026-02-23 21:07:28.941146 | orchestrator | changed: [localhost] => (item=creator) 2026-02-23 21:07:28.941151 | orchestrator | 2026-02-23 21:07:28.941156 | orchestrator | TASK [Create test server group] ************************************************ 2026-02-23 21:07:28.941162 | orchestrator | Monday 23 February 2026 21:05:34 +0000 (0:00:11.236) 0:00:33.832 ******* 2026-02-23 21:07:28.941167 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941172 | orchestrator | 2026-02-23 21:07:28.941177 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-02-23 21:07:28.941183 | orchestrator | Monday 23 February 2026 21:05:38 +0000 (0:00:03.934) 0:00:37.766 ******* 2026-02-23 21:07:28.941202 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941208 | orchestrator | 2026-02-23 21:07:28.941213 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-02-23 21:07:28.941218 | orchestrator | Monday 23 February 2026 21:05:43 +0000 (0:00:04.759) 0:00:42.526 ******* 2026-02-23 21:07:28.941223 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941228 | orchestrator | 2026-02-23 21:07:28.941233 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-02-23 21:07:28.941238 | orchestrator | Monday 23 February 2026 21:05:47 +0000 (0:00:04.308) 0:00:46.835 ******* 2026-02-23 21:07:28.941243 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941248 | orchestrator | 2026-02-23 21:07:28.941253 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-02-23 21:07:28.941258 | orchestrator | Monday 23 February 2026 21:05:51 +0000 (0:00:03.696) 0:00:50.531 ******* 2026-02-23 21:07:28.941263 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941268 | orchestrator | 2026-02-23 21:07:28.941273 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-02-23 21:07:28.941278 | orchestrator | Monday 23 February 2026 21:05:55 +0000 (0:00:04.252) 0:00:54.784 ******* 2026-02-23 21:07:28.941283 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941288 | orchestrator | 2026-02-23 21:07:28.941293 | orchestrator | TASK [Create test network] ***************************************************** 2026-02-23 21:07:28.941298 | orchestrator | Monday 23 February 2026 21:05:59 +0000 (0:00:03.767) 0:00:58.551 ******* 2026-02-23 21:07:28.941303 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941307 | orchestrator | 2026-02-23 21:07:28.941313 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-02-23 21:07:28.941318 | orchestrator | Monday 23 February 2026 21:06:04 +0000 (0:00:04.635) 0:01:03.187 ******* 2026-02-23 21:07:28.941323 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941327 | orchestrator | 2026-02-23 21:07:28.941332 | orchestrator | TASK [Create test router] ****************************************************** 2026-02-23 21:07:28.941338 | orchestrator | Monday 23 February 2026 21:06:09 +0000 (0:00:05.036) 0:01:08.224 ******* 2026-02-23 21:07:28.941343 | orchestrator | changed: [localhost] 2026-02-23 21:07:28.941348 | orchestrator | 2026-02-23 21:07:28.941353 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-02-23 21:07:28.941358 | orchestrator | 2026-02-23 21:07:28.941363 | orchestrator | TASK [Get test server group] *************************************************** 2026-02-23 21:07:28.941368 
| orchestrator | Monday 23 February 2026 21:06:20 +0000 (0:00:11.310) 0:01:19.534 ******* 2026-02-23 21:07:28.941373 | orchestrator | ok: [localhost] 2026-02-23 21:07:28.941378 | orchestrator | 2026-02-23 21:07:28.941383 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-02-23 21:07:28.941388 | orchestrator | Monday 23 February 2026 21:06:23 +0000 (0:00:03.157) 0:01:22.691 ******* 2026-02-23 21:07:28.941393 | orchestrator | skipping: [localhost] 2026-02-23 21:07:28.941398 | orchestrator | 2026-02-23 21:07:28.941444 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-02-23 21:07:28.941451 | orchestrator | Monday 23 February 2026 21:06:23 +0000 (0:00:00.051) 0:01:22.743 ******* 2026-02-23 21:07:28.941456 | orchestrator | skipping: [localhost] 2026-02-23 21:07:28.941461 | orchestrator | 2026-02-23 21:07:28.941466 | orchestrator | TASK [Delete test instances] *************************************************** 2026-02-23 21:07:28.941471 | orchestrator | Monday 23 February 2026 21:06:23 +0000 (0:00:00.047) 0:01:22.791 ******* 2026-02-23 21:07:28.941476 | orchestrator | skipping: [localhost] => (item=test-4)  2026-02-23 21:07:28.941482 | orchestrator | skipping: [localhost] => (item=test-3)  2026-02-23 21:07:28.941496 | orchestrator | skipping: [localhost] => (item=test-2)  2026-02-23 21:07:28.941502 | orchestrator | skipping: [localhost] => (item=test-1)  2026-02-23 21:07:28.941520 | orchestrator | skipping: [localhost] => (item=test)  2026-02-23 21:07:28.941526 | orchestrator | skipping: [localhost] 2026-02-23 21:07:28.941532 | orchestrator | 2026-02-23 21:07:28.941542 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-02-23 21:07:28.941548 | orchestrator | Monday 23 February 2026 21:06:23 +0000 (0:00:00.150) 0:01:22.941 ******* 2026-02-23 21:07:28.941554 | orchestrator | skipping: [localhost] 2026-02-23 
21:07:28.941559 | orchestrator | 2026-02-23 21:07:28.941565 | orchestrator | TASK [Create test instances] *************************************************** 2026-02-23 21:07:28.941571 | orchestrator | Monday 23 February 2026 21:06:24 +0000 (0:00:00.140) 0:01:23.082 ******* 2026-02-23 21:07:28.941576 | orchestrator | changed: [localhost] => (item=test) 2026-02-23 21:07:28.941582 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-23 21:07:28.941588 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-23 21:07:28.941593 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-23 21:07:28.941599 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-23 21:07:28.941604 | orchestrator | 2026-02-23 21:07:28.941610 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-02-23 21:07:28.941616 | orchestrator | Monday 23 February 2026 21:06:28 +0000 (0:00:04.488) 0:01:27.572 ******* 2026-02-23 21:07:28.941622 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-23 21:07:28.941628 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-02-23 21:07:28.941634 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-02-23 21:07:28.941639 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
2026-02-23 21:07:28.941646 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j629161266125.2584', 'results_file': '/ansible/.ansible_async/j629161266125.2584', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-23 21:07:28.941657 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j681146685873.2609', 'results_file': '/ansible/.ansible_async/j681146685873.2609', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-23 21:07:28.941663 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j668501604904.2642', 'results_file': '/ansible/.ansible_async/j668501604904.2642', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-23 21:07:28.941669 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j917865882666.2667', 'results_file': '/ansible/.ansible_async/j917865882666.2667', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-23 21:07:28.941675 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j166439791463.2692', 'results_file': '/ansible/.ansible_async/j166439791463.2692', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-23 21:07:28.941681 | orchestrator | 2026-02-23 21:07:28.941686 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-02-23 21:07:28.941692 | orchestrator | Monday 23 February 2026 21:07:15 +0000 (0:00:46.856) 0:02:14.429 ******* 2026-02-23 21:07:28.941697 | orchestrator | changed: [localhost] => (item=test) 2026-02-23 21:07:28.941702 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-23 21:07:28.941707 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-23 21:07:28.941712 | orchestrator | changed: 
[localhost] => (item=test-3) 2026-02-23 21:07:28.941717 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-23 21:07:28.941722 | orchestrator | 2026-02-23 21:07:28.941733 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-02-23 21:07:28.941738 | orchestrator | Monday 23 February 2026 21:07:19 +0000 (0:00:04.410) 0:02:18.839 ******* 2026-02-23 21:07:28.941743 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-02-23 21:07:28.941752 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j848747442172.2789', 'results_file': '/ansible/.ansible_async/j848747442172.2789', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-23 21:07:28.941758 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j808014199695.2814', 'results_file': '/ansible/.ansible_async/j808014199695.2814', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-23 21:07:28.941763 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j780266364651.2839', 'results_file': '/ansible/.ansible_async/j780266364651.2839', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-23 21:07:28.941771 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j954694984313.2864', 'results_file': '/ansible/.ansible_async/j954694984313.2864', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-23 21:08:09.236768 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j53599134963.2889', 'results_file': '/ansible/.ansible_async/j53599134963.2889', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-23 21:08:09.236822 | orchestrator | 2026-02-23 
21:08:09.236828 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-02-23 21:08:09.236833 | orchestrator | Monday 23 February 2026 21:07:29 +0000 (0:00:09.841) 0:02:28.681 ******* 2026-02-23 21:08:09.236837 | orchestrator | changed: [localhost] => (item=test) 2026-02-23 21:08:09.236842 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-23 21:08:09.236846 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-23 21:08:09.236850 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-23 21:08:09.236854 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-23 21:08:09.236858 | orchestrator | 2026-02-23 21:08:09.236862 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-02-23 21:08:09.236865 | orchestrator | Monday 23 February 2026 21:07:34 +0000 (0:00:04.332) 0:02:33.013 ******* 2026-02-23 21:08:09.236869 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-02-23 21:08:09.236874 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j864372794958.2965', 'results_file': '/ansible/.ansible_async/j864372794958.2965', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-23 21:08:09.236878 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j327250894688.2990', 'results_file': '/ansible/.ansible_async/j327250894688.2990', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-23 21:08:09.236889 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j75287325885.3016', 'results_file': '/ansible/.ansible_async/j75287325885.3016', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-23 21:08:09.236893 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j707423556622.3042', 'results_file': '/ansible/.ansible_async/j707423556622.3042', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-23 21:08:09.236897 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j601143066272.3068', 'results_file': '/ansible/.ansible_async/j601143066272.3068', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-23 21:08:09.236901 | orchestrator | 2026-02-23 21:08:09.236905 | orchestrator | TASK [Create test volume] ****************************************************** 2026-02-23 21:08:09.236909 | orchestrator | Monday 23 February 2026 21:07:44 +0000 (0:00:10.484) 0:02:43.498 ******* 2026-02-23 21:08:09.236913 | orchestrator | changed: [localhost] 2026-02-23 21:08:09.236926 | orchestrator | 2026-02-23 21:08:09.236930 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-23 21:08:09.236934 | orchestrator | Monday 23 February 2026 
21:07:50 +0000 (0:00:06.269) 0:02:49.767 ******* 2026-02-23 21:08:09.236938 | orchestrator | changed: [localhost] 2026-02-23 21:08:09.236941 | orchestrator | 2026-02-23 21:08:09.236945 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-02-23 21:08:09.236949 | orchestrator | Monday 23 February 2026 21:08:04 +0000 (0:00:13.278) 0:03:03.046 ******* 2026-02-23 21:08:09.236953 | orchestrator | ok: [localhost] 2026-02-23 21:08:09.236956 | orchestrator | 2026-02-23 21:08:09.236960 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-02-23 21:08:09.236964 | orchestrator | Monday 23 February 2026 21:08:08 +0000 (0:00:04.910) 0:03:07.956 ******* 2026-02-23 21:08:09.236967 | orchestrator | ok: [localhost] => { 2026-02-23 21:08:09.236971 | orchestrator |  "msg": "192.168.112.192" 2026-02-23 21:08:09.236975 | orchestrator | } 2026-02-23 21:08:09.236979 | orchestrator | 2026-02-23 21:08:09.236983 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-23 21:08:09.236987 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-23 21:08:09.236991 | orchestrator | 2026-02-23 21:08:09.236995 | orchestrator | 2026-02-23 21:08:09.236999 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-23 21:08:09.237002 | orchestrator | Monday 23 February 2026 21:08:09 +0000 (0:00:00.041) 0:03:07.997 ******* 2026-02-23 21:08:09.237006 | orchestrator | =============================================================================== 2026-02-23 21:08:09.237010 | orchestrator | Wait for instance creation to complete --------------------------------- 46.86s 2026-02-23 21:08:09.237013 | orchestrator | Attach test volume ----------------------------------------------------- 13.28s 2026-02-23 21:08:09.237017 | orchestrator | Create test router 
----------------------------------------------------- 11.31s 2026-02-23 21:08:09.237021 | orchestrator | Add member roles to user test ------------------------------------------ 11.24s 2026-02-23 21:08:09.237025 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.48s 2026-02-23 21:08:09.237028 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.84s 2026-02-23 21:08:09.237032 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.53s 2026-02-23 21:08:09.237043 | orchestrator | Create test volume ------------------------------------------------------ 6.27s 2026-02-23 21:08:09.237047 | orchestrator | Create test subnet ------------------------------------------------------ 5.04s 2026-02-23 21:08:09.237050 | orchestrator | Create floating ip address ---------------------------------------------- 4.91s 2026-02-23 21:08:09.237054 | orchestrator | Create ssh security group ----------------------------------------------- 4.76s 2026-02-23 21:08:09.237058 | orchestrator | Create test network ----------------------------------------------------- 4.64s 2026-02-23 21:08:09.237062 | orchestrator | Create test instances --------------------------------------------------- 4.49s 2026-02-23 21:08:09.237065 | orchestrator | Add metadata to instances ----------------------------------------------- 4.41s 2026-02-23 21:08:09.237069 | orchestrator | Add tag to instances ---------------------------------------------------- 4.33s 2026-02-23 21:08:09.237073 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.31s 2026-02-23 21:08:09.237076 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.25s 2026-02-23 21:08:09.237080 | orchestrator | Create test-admin user -------------------------------------------------- 4.11s 2026-02-23 21:08:09.237084 | orchestrator | Create test user 
-------------------------------------------------------- 4.08s 2026-02-23 21:08:09.237088 | orchestrator | Create test project ----------------------------------------------------- 4.05s 2026-02-23 21:08:09.539986 | orchestrator | + server_list 2026-02-23 21:08:09.540092 | orchestrator | + openstack --os-cloud test server list 2026-02-23 21:08:13.051856 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-23 21:08:13.051970 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-23 21:08:13.051979 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-23 21:08:13.051994 | orchestrator | | d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 | test-4 | ACTIVE | test=192.168.112.166, 192.168.200.3 | N/A (booted from volume) | SCS-1L-1 | 2026-02-23 21:08:13.051999 | orchestrator | | 95eb0896-2059-42bc-afda-7157fd138a76 | test-3 | ACTIVE | test=192.168.112.100, 192.168.200.56 | N/A (booted from volume) | SCS-1L-1 | 2026-02-23 21:08:13.052004 | orchestrator | | 9ea23385-fc20-43a9-adbc-02b35774482d | test-2 | ACTIVE | test=192.168.112.172, 192.168.200.207 | N/A (booted from volume) | SCS-1L-1 | 2026-02-23 21:08:13.052007 | orchestrator | | 2b54588c-a537-47da-a043-7a20a09aefd8 | test-1 | ACTIVE | test=192.168.112.161, 192.168.200.112 | N/A (booted from volume) | SCS-1L-1 | 2026-02-23 21:08:13.052011 | orchestrator | | 8706d052-6ae2-4461-a5fe-5d9008721ccb | test | ACTIVE | test=192.168.112.192, 192.168.200.182 | N/A (booted from volume) | SCS-1L-1 | 2026-02-23 21:08:13.052015 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-23 21:08:13.293013 | orchestrator | + openstack --os-cloud test server show test 2026-02-23 21:08:16.428186 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:16.428242 | orchestrator | | Field | Value | 2026-02-23 21:08:16.428250 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:16.428259 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-23 21:08:16.428265 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-23 21:08:16.428271 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-23 21:08:16.428287 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-02-23 21:08:16.428296 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-23 21:08:16.428302 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-23 21:08:16.428316 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-23 21:08:16.428322 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-23 21:08:16.428328 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-23 21:08:16.428355 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-23 21:08:16.428359 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-23 21:08:16.428362 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-23 21:08:16.428366 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-02-23 21:08:16.428372 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-23 21:08:16.428376 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-23 21:08:16.428380 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-23T21:06:58.000000 | 2026-02-23 21:08:16.428386 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-23 21:08:16.428390 | orchestrator | | accessIPv4 | | 2026-02-23 21:08:16.428393 | orchestrator | | accessIPv6 | | 2026-02-23 21:08:16.428396 | orchestrator | | addresses | test=192.168.112.192, 192.168.200.182 | 2026-02-23 21:08:16.428403 | orchestrator | | config_drive | | 2026-02-23 21:08:16.428407 | orchestrator | | created | 2026-02-23T21:06:32Z | 2026-02-23 21:08:16.428412 | orchestrator | | description | None | 2026-02-23 21:08:16.428416 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-23 21:08:16.428421 | orchestrator | | hostId | 02561d7cb04280b242859a145f1324e35b6ee5a5ffc5185d38575a06 | 2026-02-23 21:08:16.428425 | orchestrator | | host_status | None | 2026-02-23 21:08:16.428431 | orchestrator | | id | 8706d052-6ae2-4461-a5fe-5d9008721ccb | 2026-02-23 21:08:16.428434 | orchestrator | | image | N/A (booted from volume) | 2026-02-23 21:08:16.428438 | orchestrator | | key_name | test | 2026-02-23 21:08:16.428441 | orchestrator | | locked | False | 2026-02-23 21:08:16.428444 | orchestrator | | locked_reason | None | 2026-02-23 21:08:16.428450 | orchestrator | | name | test | 2026-02-23 21:08:16.428453 | orchestrator | | pinned_availability_zone | None | 2026-02-23 21:08:16.428457 | orchestrator | | progress | 0 | 2026-02-23 21:08:16.428462 | orchestrator | | 
project_id | 86a8c6da97ab4056bb51a55fff723f51 | 2026-02-23 21:08:16.428466 | orchestrator | | properties | hostname='test' | 2026-02-23 21:08:16.428471 | orchestrator | | security_groups | name='ssh' | 2026-02-23 21:08:16.428475 | orchestrator | | | name='icmp' | 2026-02-23 21:08:16.428478 | orchestrator | | server_groups | None | 2026-02-23 21:08:16.428482 | orchestrator | | status | ACTIVE | 2026-02-23 21:08:16.428489 | orchestrator | | tags | test | 2026-02-23 21:08:16.428492 | orchestrator | | trusted_image_certificates | None | 2026-02-23 21:08:16.428496 | orchestrator | | updated | 2026-02-23T21:07:21Z | 2026-02-23 21:08:16.428499 | orchestrator | | user_id | f62d11d3164b4cd1a4577ca518279c58 | 2026-02-23 21:08:16.428504 | orchestrator | | volumes_attached | delete_on_termination='True', id='b28eddd2-7b6c-41e5-87be-ccce4f9b5f7e' | 2026-02-23 21:08:16.428508 | orchestrator | | | delete_on_termination='False', id='0bd3d72d-cb23-4f2c-9c04-365f545698e4' | 2026-02-23 21:08:16.431859 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:16.680321 | orchestrator | + openstack --os-cloud test server show test-1 2026-02-23 21:08:19.905388 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 
21:08:19.905479 | orchestrator | | Field | Value | 2026-02-23 21:08:19.905520 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:19.905527 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-23 21:08:19.905534 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-23 21:08:19.905541 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-23 21:08:19.905548 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-02-23 21:08:19.905568 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-23 21:08:19.905575 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-23 21:08:19.905597 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-23 21:08:19.905604 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-23 21:08:19.905610 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-23 21:08:19.905623 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-23 21:08:19.905629 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-23 21:08:19.905636 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-23 21:08:19.905642 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-23 21:08:19.905648 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-23 21:08:19.905658 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-23 21:08:19.905665 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-23T21:06:57.000000 | 2026-02-23 21:08:19.905677 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-23 21:08:19.905684 | orchestrator | | accessIPv4 | | 2026-02-23 
21:08:19.905696 | orchestrator | | accessIPv6 | | 2026-02-23 21:08:19.905702 | orchestrator | | addresses | test=192.168.112.161, 192.168.200.112 | 2026-02-23 21:08:19.905717 | orchestrator | | config_drive | | 2026-02-23 21:08:19.905724 | orchestrator | | created | 2026-02-23T21:06:32Z | 2026-02-23 21:08:19.905729 | orchestrator | | description | None | 2026-02-23 21:08:19.905735 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-23 21:08:19.905741 | orchestrator | | hostId | 02561d7cb04280b242859a145f1324e35b6ee5a5ffc5185d38575a06 | 2026-02-23 21:08:19.905747 | orchestrator | | host_status | None | 2026-02-23 21:08:19.905765 | orchestrator | | id | 2b54588c-a537-47da-a043-7a20a09aefd8 | 2026-02-23 21:08:19.905777 | orchestrator | | image | N/A (booted from volume) | 2026-02-23 21:08:19.905783 | orchestrator | | key_name | test | 2026-02-23 21:08:19.905816 | orchestrator | | locked | False | 2026-02-23 21:08:19.905823 | orchestrator | | locked_reason | None | 2026-02-23 21:08:19.905829 | orchestrator | | name | test-1 | 2026-02-23 21:08:19.905835 | orchestrator | | pinned_availability_zone | None | 2026-02-23 21:08:19.905846 | orchestrator | | progress | 0 | 2026-02-23 21:08:19.905852 | orchestrator | | project_id | 86a8c6da97ab4056bb51a55fff723f51 | 2026-02-23 21:08:19.905857 | orchestrator | | properties | hostname='test-1' | 2026-02-23 21:08:19.905880 | orchestrator | | security_groups | name='ssh' | 2026-02-23 21:08:19.905887 | orchestrator | | | name='icmp' | 2026-02-23 21:08:19.905893 | orchestrator | | server_groups | None | 2026-02-23 21:08:19.905900 | orchestrator | | status | ACTIVE | 2026-02-23 
21:08:19.905906 | orchestrator | | tags | test | 2026-02-23 21:08:19.905912 | orchestrator | | trusted_image_certificates | None | 2026-02-23 21:08:19.905919 | orchestrator | | updated | 2026-02-23T21:07:21Z | 2026-02-23 21:08:19.905929 | orchestrator | | user_id | f62d11d3164b4cd1a4577ca518279c58 | 2026-02-23 21:08:19.905936 | orchestrator | | volumes_attached | delete_on_termination='True', id='cf0ab259-6c81-4824-bbea-121849d11309' | 2026-02-23 21:08:19.909555 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:20.188914 | orchestrator | + openstack --os-cloud test server show test-2 2026-02-23 21:08:23.099969 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:23.100021 | orchestrator | | Field | Value | 2026-02-23 21:08:23.100027 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:23.100032 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-23 21:08:23.100036 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-23 21:08:23.100039 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-23 21:08:23.100043 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-02-23 21:08:23.100055 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-23 21:08:23.100060 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-23 21:08:23.100086 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-23 21:08:23.100091 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-23 21:08:23.100095 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-23 21:08:23.100099 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-23 21:08:23.100103 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-23 21:08:23.100107 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-23 21:08:23.100110 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-23 21:08:23.100114 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-23 21:08:23.100120 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-23 21:08:23.100128 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-23T21:06:57.000000 | 2026-02-23 21:08:23.100134 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-23 21:08:23.100138 | orchestrator | | accessIPv4 | | 2026-02-23 21:08:23.100142 | orchestrator | | accessIPv6 | | 2026-02-23 21:08:23.100146 | orchestrator | | addresses | test=192.168.112.172, 192.168.200.207 | 2026-02-23 21:08:23.100152 | orchestrator | | config_drive | | 2026-02-23 21:08:23.100159 | orchestrator | | created | 2026-02-23T21:06:33Z | 2026-02-23 21:08:23.100166 | orchestrator | | description | None | 2026-02-23 21:08:23.100176 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-23 21:08:23.100190 | orchestrator | | hostId | e00d2172e118f6c91de80732e2028cbeff50dc5429ee3413ede7e474 | 2026-02-23 21:08:23.100197 | orchestrator | | host_status | None | 2026-02-23 21:08:23.100208 | orchestrator | | id | 9ea23385-fc20-43a9-adbc-02b35774482d | 2026-02-23 21:08:23.100215 | orchestrator | | image | N/A (booted from volume) | 2026-02-23 21:08:23.100221 | orchestrator | | key_name | test | 2026-02-23 21:08:23.100227 | orchestrator | | locked | False | 2026-02-23 21:08:23.100234 | orchestrator | | locked_reason | None | 2026-02-23 21:08:23.100240 | orchestrator | | name | test-2 | 2026-02-23 21:08:23.100247 | orchestrator | | pinned_availability_zone | None | 2026-02-23 21:08:23.100257 | orchestrator | | progress | 0 | 2026-02-23 21:08:23.100263 | orchestrator | | project_id | 86a8c6da97ab4056bb51a55fff723f51 | 2026-02-23 21:08:23.100269 | orchestrator | | properties | hostname='test-2' | 2026-02-23 21:08:23.100281 | orchestrator | | security_groups | name='ssh' | 2026-02-23 21:08:23.100288 | orchestrator | | | name='icmp' | 2026-02-23 21:08:23.100296 | orchestrator | | server_groups | None | 2026-02-23 21:08:23.100303 | orchestrator | | status | ACTIVE | 2026-02-23 21:08:23.100316 | orchestrator | | tags | test | 2026-02-23 21:08:23.100357 | orchestrator | | trusted_image_certificates | None | 2026-02-23 21:08:23.100365 | orchestrator | | updated | 2026-02-23T21:07:22Z | 2026-02-23 21:08:23.100376 | orchestrator | | user_id | f62d11d3164b4cd1a4577ca518279c58 | 2026-02-23 21:08:23.100382 | orchestrator | | volumes_attached | delete_on_termination='True', id='c95fab61-de91-4259-8435-3f3b0ab6b445' | 2026-02-23 21:08:23.106857 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:23.345310 | orchestrator | + openstack --os-cloud test server show test-3 2026-02-23 21:08:26.024374 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:26.024424 | orchestrator | | Field | Value | 2026-02-23 21:08:26.024430 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:26.024434 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-23 21:08:26.024438 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-23 21:08:26.024442 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-23 21:08:26.024456 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-02-23 21:08:26.024460 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-23 21:08:26.024470 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-23 
21:08:26.024483 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-23 21:08:26.024487 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-23 21:08:26.024490 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-23 21:08:26.024494 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-23 21:08:26.024498 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-23 21:08:26.024502 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-23 21:08:26.024509 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-23 21:08:26.024512 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-23 21:08:26.024518 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-23 21:08:26.024522 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-23T21:06:57.000000 | 2026-02-23 21:08:26.024528 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-23 21:08:26.024532 | orchestrator | | accessIPv4 | | 2026-02-23 21:08:26.024536 | orchestrator | | accessIPv6 | | 2026-02-23 21:08:26.024540 | orchestrator | | addresses | test=192.168.112.100, 192.168.200.56 | 2026-02-23 21:08:26.024543 | orchestrator | | config_drive | | 2026-02-23 21:08:26.024550 | orchestrator | | created | 2026-02-23T21:06:33Z | 2026-02-23 21:08:26.024554 | orchestrator | | description | None | 2026-02-23 21:08:26.024557 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-23 21:08:26.024563 | orchestrator | | hostId | e00d2172e118f6c91de80732e2028cbeff50dc5429ee3413ede7e474 | 2026-02-23 21:08:26.024567 | orchestrator | | host_status | None | 2026-02-23 21:08:26.024574 | orchestrator | | id | 
95eb0896-2059-42bc-afda-7157fd138a76 | 2026-02-23 21:08:26.024578 | orchestrator | | image | N/A (booted from volume) | 2026-02-23 21:08:26.024582 | orchestrator | | key_name | test | 2026-02-23 21:08:26.024585 | orchestrator | | locked | False | 2026-02-23 21:08:26.024592 | orchestrator | | locked_reason | None | 2026-02-23 21:08:26.024595 | orchestrator | | name | test-3 | 2026-02-23 21:08:26.024599 | orchestrator | | pinned_availability_zone | None | 2026-02-23 21:08:26.024603 | orchestrator | | progress | 0 | 2026-02-23 21:08:26.024608 | orchestrator | | project_id | 86a8c6da97ab4056bb51a55fff723f51 | 2026-02-23 21:08:26.024612 | orchestrator | | properties | hostname='test-3' | 2026-02-23 21:08:26.024619 | orchestrator | | security_groups | name='ssh' | 2026-02-23 21:08:26.024623 | orchestrator | | | name='icmp' | 2026-02-23 21:08:26.024626 | orchestrator | | server_groups | None | 2026-02-23 21:08:26.024630 | orchestrator | | status | ACTIVE | 2026-02-23 21:08:26.024636 | orchestrator | | tags | test | 2026-02-23 21:08:26.024640 | orchestrator | | trusted_image_certificates | None | 2026-02-23 21:08:26.024644 | orchestrator | | updated | 2026-02-23T21:07:23Z | 2026-02-23 21:08:26.024647 | orchestrator | | user_id | f62d11d3164b4cd1a4577ca518279c58 | 2026-02-23 21:08:26.024653 | orchestrator | | volumes_attached | delete_on_termination='True', id='85cbf3f3-9d01-4f02-b3fb-b206f1ed6a81' | 2026-02-23 21:08:26.029021 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:26.281307 | orchestrator | + openstack --os-cloud test server show test-4 2026-02-23 21:08:29.215558 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:29.215654 | orchestrator | | Field | Value | 2026-02-23 21:08:29.215664 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:29.215688 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-23 21:08:29.215696 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-23 21:08:29.215704 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-23 21:08:29.215711 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-02-23 21:08:29.215719 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-23 21:08:29.215726 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-23 21:08:29.215747 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-23 21:08:29.215754 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-23 21:08:29.215758 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-23 21:08:29.215767 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-23 21:08:29.215771 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-23 21:08:29.215775 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-23 21:08:29.215779 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-02-23 21:08:29.216011 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-23 21:08:29.216025 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-23 21:08:29.216032 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-23T21:06:57.000000 | 2026-02-23 21:08:29.216046 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-23 21:08:29.216052 | orchestrator | | accessIPv4 | | 2026-02-23 21:08:29.216064 | orchestrator | | accessIPv6 | | 2026-02-23 21:08:29.216070 | orchestrator | | addresses | test=192.168.112.166, 192.168.200.3 | 2026-02-23 21:08:29.216075 | orchestrator | | config_drive | | 2026-02-23 21:08:29.216081 | orchestrator | | created | 2026-02-23T21:06:34Z | 2026-02-23 21:08:29.216092 | orchestrator | | description | None | 2026-02-23 21:08:29.216099 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-23 21:08:29.216106 | orchestrator | | hostId | e00d2172e118f6c91de80732e2028cbeff50dc5429ee3413ede7e474 | 2026-02-23 21:08:29.216112 | orchestrator | | host_status | None | 2026-02-23 21:08:29.216125 | orchestrator | | id | d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 | 2026-02-23 21:08:29.216137 | orchestrator | | image | N/A (booted from volume) | 2026-02-23 21:08:29.216144 | orchestrator | | key_name | test | 2026-02-23 21:08:29.216150 | orchestrator | | locked | False | 2026-02-23 21:08:29.216157 | orchestrator | | locked_reason | None | 2026-02-23 21:08:29.216164 | orchestrator | | name | test-4 | 2026-02-23 21:08:29.216174 | orchestrator | | pinned_availability_zone | None | 2026-02-23 21:08:29.216178 | orchestrator | | progress | 0 | 2026-02-23 
21:08:29.216183 | orchestrator | | project_id | 86a8c6da97ab4056bb51a55fff723f51 | 2026-02-23 21:08:29.216188 | orchestrator | | properties | hostname='test-4' | 2026-02-23 21:08:29.216198 | orchestrator | | security_groups | name='ssh' | 2026-02-23 21:08:29.216206 | orchestrator | | | name='icmp' | 2026-02-23 21:08:29.216211 | orchestrator | | server_groups | None | 2026-02-23 21:08:29.216215 | orchestrator | | status | ACTIVE | 2026-02-23 21:08:29.216220 | orchestrator | | tags | test | 2026-02-23 21:08:29.216224 | orchestrator | | trusted_image_certificates | None | 2026-02-23 21:08:29.216232 | orchestrator | | updated | 2026-02-23T21:07:24Z | 2026-02-23 21:08:29.216240 | orchestrator | | user_id | f62d11d3164b4cd1a4577ca518279c58 | 2026-02-23 21:08:29.216246 | orchestrator | | volumes_attached | delete_on_termination='True', id='4f488191-3e28-4ece-ab8a-3379f0f84200' | 2026-02-23 21:08:29.219266 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-23 21:08:29.494500 | orchestrator | + server_ping 2026-02-23 21:08:29.495107 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-02-23 21:08:29.495142 | orchestrator | ++ tr -d '\r' 2026-02-23 21:08:32.253422 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-23 21:08:32.253511 | orchestrator | + ping -c3 192.168.112.100 2026-02-23 21:08:32.266453 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 
2026-02-23 21:08:32.266542 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=4.63 ms 2026-02-23 21:08:33.267127 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=3.36 ms 2026-02-23 21:08:34.266407 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.12 ms 2026-02-23 21:08:34.266464 | orchestrator | 2026-02-23 21:08:34.266473 | orchestrator | --- 192.168.112.100 ping statistics --- 2026-02-23 21:08:34.266480 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-02-23 21:08:34.266487 | orchestrator | rtt min/avg/max/mdev = 1.115/3.033/4.630/1.452 ms 2026-02-23 21:08:34.267770 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-23 21:08:34.267815 | orchestrator | + ping -c3 192.168.112.192 2026-02-23 21:08:34.277287 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 2026-02-23 21:08:34.277365 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=5.00 ms 2026-02-23 21:08:35.275152 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=1.49 ms 2026-02-23 21:08:36.276853 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.23 ms 2026-02-23 21:08:36.276908 | orchestrator | 2026-02-23 21:08:36.276914 | orchestrator | --- 192.168.112.192 ping statistics --- 2026-02-23 21:08:36.276919 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-23 21:08:36.276924 | orchestrator | rtt min/avg/max/mdev = 1.230/2.575/5.004/1.720 ms 2026-02-23 21:08:36.277630 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-23 21:08:36.277673 | orchestrator | + ping -c3 192.168.112.166 2026-02-23 21:08:36.287273 | orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data. 
2026-02-23 21:08:36.287367 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=5.33 ms 2026-02-23 21:08:37.286756 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=2.71 ms 2026-02-23 21:08:38.288239 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=1.83 ms 2026-02-23 21:08:38.288346 | orchestrator | 2026-02-23 21:08:38.288359 | orchestrator | --- 192.168.112.166 ping statistics --- 2026-02-23 21:08:38.288367 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-23 21:08:38.288374 | orchestrator | rtt min/avg/max/mdev = 1.827/3.286/5.325/1.485 ms 2026-02-23 21:08:38.288382 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-23 21:08:38.288388 | orchestrator | + ping -c3 192.168.112.172 2026-02-23 21:08:38.301223 | orchestrator | PING 192.168.112.172 (192.168.112.172) 56(84) bytes of data. 2026-02-23 21:08:38.301426 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=1 ttl=63 time=8.19 ms 2026-02-23 21:08:39.297007 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=2 ttl=63 time=2.61 ms 2026-02-23 21:08:40.298570 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=3 ttl=63 time=2.47 ms 2026-02-23 21:08:40.298665 | orchestrator | 2026-02-23 21:08:40.298689 | orchestrator | --- 192.168.112.172 ping statistics --- 2026-02-23 21:08:40.298698 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-23 21:08:40.298705 | orchestrator | rtt min/avg/max/mdev = 2.473/4.423/8.191/2.664 ms 2026-02-23 21:08:40.299368 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-23 21:08:40.299431 | orchestrator | + ping -c3 192.168.112.161 2026-02-23 21:08:40.311538 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 
2026-02-23 21:08:40.311672 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=7.35 ms 2026-02-23 21:08:41.308066 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=2.29 ms 2026-02-23 21:08:42.308814 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.48 ms 2026-02-23 21:08:42.308868 | orchestrator | 2026-02-23 21:08:42.308877 | orchestrator | --- 192.168.112.161 ping statistics --- 2026-02-23 21:08:42.308884 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-23 21:08:42.308890 | orchestrator | rtt min/avg/max/mdev = 1.481/3.704/7.346/2.596 ms 2026-02-23 21:08:42.309670 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-23 21:08:42.309697 | orchestrator | + compute_list 2026-02-23 21:08:42.309705 | orchestrator | + osism manage compute list testbed-node-3 2026-02-23 21:08:44.307422 | orchestrator | 2026-02-23 21:08:44 | ERROR  | Unable to get ansible vault password 2026-02-23 21:08:44.307473 | orchestrator | 2026-02-23 21:08:44 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-23 21:08:44.307483 | orchestrator | 2026-02-23 21:08:44 | ERROR  | Dropping encrypted entries 2026-02-23 21:08:45.686343 | orchestrator | +--------------------------------------+--------+----------+ 2026-02-23 21:08:45.686395 | orchestrator | | ID | Name | Status | 2026-02-23 21:08:45.686400 | orchestrator | |--------------------------------------+--------+----------| 2026-02-23 21:08:45.686404 | orchestrator | | 2b54588c-a537-47da-a043-7a20a09aefd8 | test-1 | ACTIVE | 2026-02-23 21:08:45.686408 | orchestrator | | 8706d052-6ae2-4461-a5fe-5d9008721ccb | test | ACTIVE | 2026-02-23 21:08:45.686412 | orchestrator | +--------------------------------------+--------+----------+ 2026-02-23 21:08:45.990850 | orchestrator | + osism manage compute list testbed-node-4 2026-02-23 21:08:48.045206 | orchestrator | 2026-02-23 21:08:48 | 
ERROR  | Unable to get ansible vault password 2026-02-23 21:08:48.045347 | orchestrator | 2026-02-23 21:08:48 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-23 21:08:48.045362 | orchestrator | 2026-02-23 21:08:48 | ERROR  | Dropping encrypted entries 2026-02-23 21:08:49.710355 | orchestrator | +--------------------------------------+--------+----------+ 2026-02-23 21:08:49.710448 | orchestrator | | ID | Name | Status | 2026-02-23 21:08:49.710458 | orchestrator | |--------------------------------------+--------+----------| 2026-02-23 21:08:49.710466 | orchestrator | | d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 | test-4 | ACTIVE | 2026-02-23 21:08:49.710473 | orchestrator | | 95eb0896-2059-42bc-afda-7157fd138a76 | test-3 | ACTIVE | 2026-02-23 21:08:49.710479 | orchestrator | | 9ea23385-fc20-43a9-adbc-02b35774482d | test-2 | ACTIVE | 2026-02-23 21:08:49.710486 | orchestrator | +--------------------------------------+--------+----------+ 2026-02-23 21:08:50.032372 | orchestrator | + osism manage compute list testbed-node-5 2026-02-23 21:08:52.015551 | orchestrator | 2026-02-23 21:08:52 | ERROR  | Unable to get ansible vault password 2026-02-23 21:08:52.015647 | orchestrator | 2026-02-23 21:08:52 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-23 21:08:52.015657 | orchestrator | 2026-02-23 21:08:52 | ERROR  | Dropping encrypted entries 2026-02-23 21:08:52.895386 | orchestrator | +------+--------+----------+ 2026-02-23 21:08:52.895489 | orchestrator | | ID | Name | Status | 2026-02-23 21:08:52.895500 | orchestrator | |------+--------+----------| 2026-02-23 21:08:52.895507 | orchestrator | +------+--------+----------+ 2026-02-23 21:08:53.192426 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-02-23 21:08:55.241266 | orchestrator | 2026-02-23 21:08:55 | ERROR  | Unable to get ansible 
vault password 2026-02-23 21:08:55.241384 | orchestrator | 2026-02-23 21:08:55 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-23 21:08:55.241393 | orchestrator | 2026-02-23 21:08:55 | ERROR  | Dropping encrypted entries 2026-02-23 21:08:56.553971 | orchestrator | 2026-02-23 21:08:56 | INFO  | Live migrating server d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 2026-02-23 21:09:09.279186 | orchestrator | 2026-02-23 21:09:09 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress 2026-02-23 21:09:11.646792 | orchestrator | 2026-02-23 21:09:11 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress 2026-02-23 21:09:14.009335 | orchestrator | 2026-02-23 21:09:14 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress 2026-02-23 21:09:16.243641 | orchestrator | 2026-02-23 21:09:16 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress 2026-02-23 21:09:18.506285 | orchestrator | 2026-02-23 21:09:18 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress 2026-02-23 21:09:20.845800 | orchestrator | 2026-02-23 21:09:20 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress 2026-02-23 21:09:23.125112 | orchestrator | 2026-02-23 21:09:23 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress 2026-02-23 21:09:25.382524 | orchestrator | 2026-02-23 21:09:25 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress 2026-02-23 21:09:27.620253 | orchestrator | 2026-02-23 21:09:27 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress 2026-02-23 21:09:29.895967 | orchestrator | 2026-02-23 21:09:29 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) 
completed with status ACTIVE
2026-02-23 21:09:29.896039 | orchestrator | 2026-02-23 21:09:29 | INFO  | Live migrating server 95eb0896-2059-42bc-afda-7157fd138a76
2026-02-23 21:09:41.712383 | orchestrator | 2026-02-23 21:09:41 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:09:44.046494 | orchestrator | 2026-02-23 21:09:44 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:09:46.395409 | orchestrator | 2026-02-23 21:09:46 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:09:48.649265 | orchestrator | 2026-02-23 21:09:48 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:09:51.188088 | orchestrator | 2026-02-23 21:09:51 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:09:53.476273 | orchestrator | 2026-02-23 21:09:53 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:09:55.876290 | orchestrator | 2026-02-23 21:09:55 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:09:58.441400 | orchestrator | 2026-02-23 21:09:58 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:10:00.867509 | orchestrator | 2026-02-23 21:10:00 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) completed with status ACTIVE
2026-02-23 21:10:00.867605 | orchestrator | 2026-02-23 21:10:00 | INFO  | Live migrating server 9ea23385-fc20-43a9-adbc-02b35774482d
2026-02-23 21:10:12.941305 | orchestrator | 2026-02-23 21:10:12 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:10:15.217293 | orchestrator | 2026-02-23 21:10:15 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:10:17.574836 | orchestrator | 2026-02-23 21:10:17 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:10:19.920348 | orchestrator | 2026-02-23 21:10:19 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:10:22.132763 | orchestrator | 2026-02-23 21:10:22 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:10:24.390432 | orchestrator | 2026-02-23 21:10:24 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:10:26.663789 | orchestrator | 2026-02-23 21:10:26 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:10:28.895521 | orchestrator | 2026-02-23 21:10:28 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:10:31.265765 | orchestrator | 2026-02-23 21:10:31 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) completed with status ACTIVE
2026-02-23 21:10:31.565564 | orchestrator | + compute_list
2026-02-23 21:10:31.565613 | orchestrator | + osism manage compute list testbed-node-3
2026-02-23 21:10:33.532609 | orchestrator | 2026-02-23 21:10:33 | ERROR  | Unable to get ansible vault password
2026-02-23 21:10:33.532670 | orchestrator | 2026-02-23 21:10:33 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:10:33.532680 | orchestrator | 2026-02-23 21:10:33 | ERROR  | Dropping encrypted entries
2026-02-23 21:10:34.782719 | orchestrator | +--------------------------------------+--------+----------+
2026-02-23 21:10:34.782782 | orchestrator | | ID                                   | Name   | Status   |
2026-02-23 21:10:34.782790 | orchestrator | |--------------------------------------+--------+----------|
2026-02-23 21:10:34.782796 | orchestrator | | d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 | test-4 | ACTIVE   |
2026-02-23 21:10:34.782802 | orchestrator | | 95eb0896-2059-42bc-afda-7157fd138a76 | test-3 | ACTIVE   |
2026-02-23 21:10:34.782808 | orchestrator | | 9ea23385-fc20-43a9-adbc-02b35774482d | test-2 | ACTIVE   |
2026-02-23 21:10:34.782824 | orchestrator | | 2b54588c-a537-47da-a043-7a20a09aefd8 | test-1 | ACTIVE   |
2026-02-23 21:10:34.782831 | orchestrator | | 8706d052-6ae2-4461-a5fe-5d9008721ccb | test   | ACTIVE   |
2026-02-23 21:10:34.782837 | orchestrator | +--------------------------------------+--------+----------+
2026-02-23 21:10:35.075613 | orchestrator | + osism manage compute list testbed-node-4
2026-02-23 21:10:37.081894 | orchestrator | 2026-02-23 21:10:37 | ERROR  | Unable to get ansible vault password
2026-02-23 21:10:37.081992 | orchestrator | 2026-02-23 21:10:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:10:37.082004 | orchestrator | 2026-02-23 21:10:37 | ERROR  | Dropping encrypted entries
2026-02-23 21:10:37.929189 | orchestrator | +------+--------+----------+
2026-02-23 21:10:37.929286 | orchestrator | | ID   | Name   | Status   |
2026-02-23 21:10:37.929294 | orchestrator | |------+--------+----------|
2026-02-23 21:10:37.929298 | orchestrator | +------+--------+----------+
2026-02-23 21:10:38.254602 | orchestrator | + osism manage compute list testbed-node-5
2026-02-23 21:10:40.288010 | orchestrator | 2026-02-23 21:10:40 | ERROR  | Unable to get ansible vault password
2026-02-23 21:10:40.288121 | orchestrator | 2026-02-23 21:10:40 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:10:40.288134 | orchestrator | 2026-02-23 21:10:40 | ERROR  | Dropping encrypted entries
2026-02-23 21:10:41.059166 | orchestrator | +------+--------+----------+
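The migrate-then-poll pattern visible in the log above (trigger a live migration, then poll until the server leaves the MIGRATING state) can be sketched as follows. This is a minimal sketch, not the actual `osism` implementation; `wait_for_live_migration` and its `get_status` callback are hypothetical names standing in for an API or CLI status lookup.

```python
import time

def wait_for_live_migration(get_status, server_id, name, interval=2.0, timeout=600.0):
    """Poll a status callback until a live migration finishes or times out.

    `get_status` stands in for a lookup such as
    `openstack server show <id> -f value -c status` (assumed wiring).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(server_id)
        if status != "MIGRATING":
            # e.g. "completed with status ACTIVE" in the log above
            return status
        print(f"Live migration of {server_id} ({name}) is still in progress")
        time.sleep(interval)
    raise TimeoutError(f"live migration of {server_id} did not finish within {timeout}s")
```

With a stub that reports MIGRATING twice and then ACTIVE, the helper prints two progress lines and returns the final status, matching the cadence of the entries above.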
2026-02-23 21:10:41.059237 | orchestrator | | ID   | Name   | Status   |
2026-02-23 21:10:41.059243 | orchestrator | |------+--------+----------|
2026-02-23 21:10:41.059273 | orchestrator | +------+--------+----------+
2026-02-23 21:10:41.372661 | orchestrator | + server_ping
2026-02-23 21:10:41.374373 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-02-23 21:10:41.374438 | orchestrator | ++ tr -d '\r'
2026-02-23 21:10:44.096667 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:10:44.096764 | orchestrator | + ping -c3 192.168.112.100
2026-02-23 21:10:44.104875 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2026-02-23 21:10:44.104946 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=6.39 ms
2026-02-23 21:10:45.102006 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=1.77 ms
2026-02-23 21:10:46.105421 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=2.55 ms
2026-02-23 21:10:46.105508 | orchestrator |
2026-02-23 21:10:46.105517 | orchestrator | --- 192.168.112.100 ping statistics ---
2026-02-23 21:10:46.105523 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-02-23 21:10:46.105527 | orchestrator | rtt min/avg/max/mdev = 1.765/3.565/6.386/2.019 ms
2026-02-23 21:10:46.105532 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:10:46.105537 | orchestrator | + ping -c3 192.168.112.192
2026-02-23 21:10:46.116344 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2026-02-23 21:10:46.116522 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=5.50 ms
2026-02-23 21:10:47.115363 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.12 ms
2026-02-23 21:10:48.116487 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.75 ms
2026-02-23 21:10:48.116565 | orchestrator |
2026-02-23 21:10:48.116571 | orchestrator | --- 192.168.112.192 ping statistics ---
2026-02-23 21:10:48.116576 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:10:48.116581 | orchestrator | rtt min/avg/max/mdev = 1.750/3.123/5.503/1.689 ms
2026-02-23 21:10:48.116586 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:10:48.116590 | orchestrator | + ping -c3 192.168.112.166
2026-02-23 21:10:48.126836 | orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data.
2026-02-23 21:10:48.126948 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=5.74 ms
2026-02-23 21:10:49.126263 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=2.41 ms
2026-02-23 21:10:50.127229 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=1.79 ms
2026-02-23 21:10:50.127300 | orchestrator |
2026-02-23 21:10:50.127312 | orchestrator | --- 192.168.112.166 ping statistics ---
2026-02-23 21:10:50.127319 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-02-23 21:10:50.127325 | orchestrator | rtt min/avg/max/mdev = 1.792/3.313/5.740/1.734 ms
2026-02-23 21:10:50.127332 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:10:50.127339 | orchestrator | + ping -c3 192.168.112.172
2026-02-23 21:10:50.137431 | orchestrator | PING 192.168.112.172 (192.168.112.172) 56(84) bytes of data.
2026-02-23 21:10:50.137513 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=1 ttl=63 time=6.50 ms
2026-02-23 21:10:51.134944 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=2 ttl=63 time=2.16 ms
2026-02-23 21:10:52.136646 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=3 ttl=63 time=1.90 ms
2026-02-23 21:10:52.136718 | orchestrator |
2026-02-23 21:10:52.136725 | orchestrator | --- 192.168.112.172 ping statistics ---
2026-02-23 21:10:52.136731 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:10:52.136735 | orchestrator | rtt min/avg/max/mdev = 1.897/3.518/6.503/2.112 ms
2026-02-23 21:10:52.136741 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:10:52.136746 | orchestrator | + ping -c3 192.168.112.161
2026-02-23 21:10:52.145888 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data.
2026-02-23 21:10:52.145972 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=4.13 ms
2026-02-23 21:10:53.145576 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=1.97 ms
2026-02-23 21:10:54.146338 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.73 ms
2026-02-23 21:10:54.146418 | orchestrator |
2026-02-23 21:10:54.146426 | orchestrator | --- 192.168.112.161 ping statistics ---
2026-02-23 21:10:54.146433 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:10:54.146453 | orchestrator | rtt min/avg/max/mdev = 1.733/2.609/4.130/1.079 ms
2026-02-23 21:10:54.147438 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-02-23 21:10:56.425639 | orchestrator | 2026-02-23 21:10:56 | ERROR  | Unable to get ansible vault password
2026-02-23 21:10:56.425743 | orchestrator | 2026-02-23 21:10:56 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:10:56.425754 | orchestrator | 2026-02-23 21:10:56 | ERROR  | Dropping encrypted entries
2026-02-23 21:10:57.203370 | orchestrator | 2026-02-23 21:10:57 | INFO  | No migratable instances found on node testbed-node-5
2026-02-23 21:10:57.551745 | orchestrator | + compute_list
2026-02-23 21:10:57.551855 | orchestrator | + osism manage compute list testbed-node-3
2026-02-23 21:10:59.713206 | orchestrator | 2026-02-23 21:10:59 | ERROR  | Unable to get ansible vault password
2026-02-23 21:10:59.713297 | orchestrator | 2026-02-23 21:10:59 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:10:59.713309 | orchestrator | 2026-02-23 21:10:59 | ERROR  | Dropping encrypted entries
2026-02-23 21:11:01.003889 | orchestrator | +--------------------------------------+--------+----------+
2026-02-23 21:11:01.003961 | orchestrator | | ID                                   | Name   | Status   |
2026-02-23 21:11:01.003968 | orchestrator | |--------------------------------------+--------+----------|
2026-02-23 21:11:01.003972 | orchestrator | | d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 | test-4 | ACTIVE   |
2026-02-23 21:11:01.003977 | orchestrator | | 95eb0896-2059-42bc-afda-7157fd138a76 | test-3 | ACTIVE   |
2026-02-23 21:11:01.003982 | orchestrator | | 9ea23385-fc20-43a9-adbc-02b35774482d | test-2 | ACTIVE   |
2026-02-23 21:11:01.003986 | orchestrator | | 2b54588c-a537-47da-a043-7a20a09aefd8 | test-1 | ACTIVE   |
2026-02-23 21:11:01.003991 | orchestrator | | 8706d052-6ae2-4461-a5fe-5d9008721ccb | test   | ACTIVE   |
2026-02-23 21:11:01.003995 | orchestrator | +--------------------------------------+--------+----------+
2026-02-23 21:11:01.366603 | orchestrator | + osism manage compute list testbed-node-4
2026-02-23 21:11:03.488842 | orchestrator | 2026-02-23 21:11:03 | ERROR  | Unable to get ansible vault password
2026-02-23 21:11:03.488932 | orchestrator | 2026-02-23 21:11:03 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:11:03.488946 | orchestrator | 2026-02-23 21:11:03 | ERROR  | Dropping encrypted entries
2026-02-23 21:11:04.288736 | orchestrator | +------+--------+----------+
2026-02-23 21:11:04.288821 | orchestrator | | ID   | Name   | Status   |
2026-02-23 21:11:04.288830 | orchestrator | |------+--------+----------|
2026-02-23 21:11:04.288838 | orchestrator | +------+--------+----------+
2026-02-23 21:11:04.628612 | orchestrator | + osism manage compute list testbed-node-5
2026-02-23 21:11:07.015712 | orchestrator | 2026-02-23 21:11:07 | ERROR  | Unable to get ansible vault password
2026-02-23 21:11:07.015791 | orchestrator | 2026-02-23 21:11:07 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:11:07.015801 | orchestrator | 2026-02-23 21:11:07 | ERROR  | Dropping encrypted entries
2026-02-23 21:11:07.816461 | orchestrator | +------+--------+----------+
2026-02-23 21:11:07.816547 | orchestrator | | ID   | Name   | Status   |
2026-02-23 21:11:07.816558 | orchestrator | |------+--------+----------|
2026-02-23 21:11:07.816563 | orchestrator | +------+--------+----------+
2026-02-23 21:11:08.231762 | orchestrator | + server_ping
2026-02-23 21:11:08.233240 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-02-23 21:11:08.233362 | orchestrator | ++ tr -d '\r'
2026-02-23 21:11:11.204454 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:11:11.204626 | orchestrator | + ping -c3 192.168.112.100
2026-02-23 21:11:11.216570 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2026-02-23 21:11:11.216652 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=6.22 ms
2026-02-23 21:11:12.213933 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.07 ms
2026-02-23 21:11:13.214867 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.61 ms
2026-02-23 21:11:13.215142 | orchestrator |
2026-02-23 21:11:13.215165 | orchestrator | --- 192.168.112.100 ping statistics ---
2026-02-23 21:11:13.215173 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:11:13.215181 | orchestrator | rtt min/avg/max/mdev = 1.612/3.301/6.220/2.072 ms
2026-02-23 21:11:13.215743 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:11:13.215762 | orchestrator | + ping -c3 192.168.112.192
2026-02-23 21:11:13.228656 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2026-02-23 21:11:13.228733 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=8.61 ms
2026-02-23 21:11:14.224224 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.31 ms
2026-02-23 21:11:15.225231 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.83 ms
2026-02-23 21:11:15.225323 | orchestrator |
2026-02-23 21:11:15.225335 | orchestrator | --- 192.168.112.192 ping statistics ---
2026-02-23 21:11:15.225343 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:11:15.225349 | orchestrator | rtt min/avg/max/mdev = 1.826/4.248/8.613/3.092 ms
2026-02-23 21:11:15.225502 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:11:15.225515 | orchestrator | + ping -c3 192.168.112.166
2026-02-23 21:11:15.236460 | orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data.
2026-02-23 21:11:15.236538 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=5.59 ms
2026-02-23 21:11:16.234556 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=1.98 ms
2026-02-23 21:11:17.235241 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=1.60 ms
2026-02-23 21:11:17.235327 | orchestrator |
2026-02-23 21:11:17.235337 | orchestrator | --- 192.168.112.166 ping statistics ---
2026-02-23 21:11:17.235345 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:11:17.235353 | orchestrator | rtt min/avg/max/mdev = 1.595/3.054/5.590/1.799 ms
2026-02-23 21:11:17.236292 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:11:17.236328 | orchestrator | + ping -c3 192.168.112.172
2026-02-23 21:11:17.245832 | orchestrator | PING 192.168.112.172 (192.168.112.172) 56(84) bytes of data.
2026-02-23 21:11:17.245918 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=1 ttl=63 time=5.78 ms
2026-02-23 21:11:18.244009 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=2 ttl=63 time=1.93 ms
2026-02-23 21:11:19.245383 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=3 ttl=63 time=1.50 ms
2026-02-23 21:11:19.246201 | orchestrator |
2026-02-23 21:11:19.246247 | orchestrator | --- 192.168.112.172 ping statistics ---
2026-02-23 21:11:19.246257 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:11:19.246265 | orchestrator | rtt min/avg/max/mdev = 1.504/3.072/5.783/1.924 ms
2026-02-23 21:11:19.246273 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:11:19.246279 | orchestrator | + ping -c3 192.168.112.161
2026-02-23 21:11:19.254228 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data.
2026-02-23 21:11:19.254293 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=4.90 ms
2026-02-23 21:11:20.253408 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=1.82 ms
2026-02-23 21:11:21.255168 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.76 ms
2026-02-23 21:11:21.255279 | orchestrator |
2026-02-23 21:11:21.255291 | orchestrator | --- 192.168.112.161 ping statistics ---
2026-02-23 21:11:21.255297 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:11:21.255302 | orchestrator | rtt min/avg/max/mdev = 1.756/2.825/4.897/1.465 ms
2026-02-23 21:11:21.255803 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-02-23 21:11:23.381914 | orchestrator | 2026-02-23 21:11:23 | ERROR  | Unable to get ansible vault password
2026-02-23 21:11:23.381963 | orchestrator | 2026-02-23 21:11:23 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:11:23.381970 | orchestrator | 2026-02-23 21:11:23 | ERROR  | Dropping encrypted entries
2026-02-23 21:11:24.484262 | orchestrator | 2026-02-23 21:11:24 | INFO  | Live migrating server d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07
2026-02-23 21:11:34.868794 | orchestrator | 2026-02-23 21:11:34 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:11:37.162080 | orchestrator | 2026-02-23 21:11:37 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:11:39.570903 | orchestrator | 2026-02-23 21:11:39 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:11:41.813402 | orchestrator | 2026-02-23 21:11:41 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:11:44.177343 | orchestrator | 2026-02-23 21:11:44 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:11:46.501924 | orchestrator | 2026-02-23 21:11:46 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:11:48.916131 | orchestrator | 2026-02-23 21:11:48 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:11:51.159191 | orchestrator | 2026-02-23 21:11:51 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:11:53.398881 | orchestrator | 2026-02-23 21:11:53 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) completed with status ACTIVE
2026-02-23 21:11:53.399066 | orchestrator | 2026-02-23 21:11:53 | INFO  | Live migrating server 95eb0896-2059-42bc-afda-7157fd138a76
2026-02-23 21:12:03.495394 | orchestrator | 2026-02-23 21:12:03 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:12:05.799902 | orchestrator | 2026-02-23 21:12:05 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:12:08.123363 | orchestrator | 2026-02-23 21:12:08 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:12:10.385195 | orchestrator | 2026-02-23 21:12:10 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:12:12.695504 | orchestrator | 2026-02-23 21:12:12 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:12:14.961506 | orchestrator | 2026-02-23 21:12:14 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:12:17.238579 | orchestrator | 2026-02-23 21:12:17 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:12:19.512518 | orchestrator | 2026-02-23 21:12:19 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:12:21.834510 | orchestrator | 2026-02-23 21:12:21 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:12:24.228640 | orchestrator | 2026-02-23 21:12:24 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) completed with status ACTIVE
2026-02-23 21:12:24.229618 | orchestrator | 2026-02-23 21:12:24 | INFO  | Live migrating server 9ea23385-fc20-43a9-adbc-02b35774482d
2026-02-23 21:12:36.190319 | orchestrator | 2026-02-23 21:12:36 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:12:38.513339 | orchestrator | 2026-02-23 21:12:38 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:12:40.782192 | orchestrator | 2026-02-23 21:12:40 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:12:43.132724 | orchestrator | 2026-02-23 21:12:43 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:12:45.397491 | orchestrator | 2026-02-23 21:12:45 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:12:47.646702 | orchestrator | 2026-02-23 21:12:47 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:12:49.894180 | orchestrator | 2026-02-23 21:12:49 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:12:52.175333 | orchestrator | 2026-02-23 21:12:52 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:12:54.506441 | orchestrator | 2026-02-23 21:12:54 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) completed with status ACTIVE
2026-02-23 21:12:54.506527 | orchestrator | 2026-02-23 21:12:54 | INFO  | Live migrating server 2b54588c-a537-47da-a043-7a20a09aefd8
2026-02-23 21:13:05.351676 | orchestrator | 2026-02-23 21:13:05 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:13:07.655204 | orchestrator | 2026-02-23 21:13:07 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:13:09.998422 | orchestrator | 2026-02-23 21:13:10 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:13:12.297644 | orchestrator | 2026-02-23 21:13:12 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:13:14.513168 | orchestrator | 2026-02-23 21:13:14 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:13:16.751596 | orchestrator | 2026-02-23 21:13:16 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:13:19.032035 | orchestrator | 2026-02-23 21:13:19 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:13:21.293899 | orchestrator | 2026-02-23 21:13:21 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:13:23.551759 | orchestrator | 2026-02-23 21:13:23 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) completed with status ACTIVE
2026-02-23 21:13:23.551845 | orchestrator | 2026-02-23 21:13:23 | INFO  | Live migrating server 8706d052-6ae2-4461-a5fe-5d9008721ccb
2026-02-23 21:13:35.257538 | orchestrator | 2026-02-23 21:13:35 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:13:37.586541 | orchestrator | 2026-02-23 21:13:37 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:13:39.908245 | orchestrator | 2026-02-23 21:13:39 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:13:42.184043 | orchestrator | 2026-02-23 21:13:42 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:13:44.535553 | orchestrator | 2026-02-23 21:13:44 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:13:46.787002 | orchestrator | 2026-02-23 21:13:46 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:13:49.087094 | orchestrator | 2026-02-23 21:13:49 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:13:51.333571 | orchestrator | 2026-02-23 21:13:51 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:13:53.593595 | orchestrator | 2026-02-23 21:13:53 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:13:55.860858 | orchestrator | 2026-02-23 21:13:55 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:13:58.106945 | orchestrator | 2026-02-23 21:13:58 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) completed with status ACTIVE
2026-02-23 21:13:58.443422 | orchestrator | + compute_list
2026-02-23 21:13:58.443483 | orchestrator | + osism manage compute list testbed-node-3
2026-02-23 21:14:00.496036 | orchestrator | 2026-02-23 21:14:00 | ERROR  | Unable to get ansible vault password
2026-02-23 21:14:00.496120 | orchestrator | 2026-02-23 21:14:00 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:14:00.496133 | orchestrator | 2026-02-23 21:14:00 | ERROR  | Dropping encrypted entries
2026-02-23 21:14:01.295921 | orchestrator | +------+--------+----------+
2026-02-23 21:14:01.296010 | orchestrator | | ID   | Name   | Status   |
2026-02-23 21:14:01.296020 | orchestrator | |------+--------+----------|
2026-02-23 21:14:01.296026 | orchestrator | +------+--------+----------+
2026-02-23 21:14:01.620059 | orchestrator | + osism manage compute list testbed-node-4
2026-02-23 21:14:03.658791 | orchestrator | 2026-02-23 21:14:03 | ERROR  | Unable to get ansible vault password
2026-02-23 21:14:03.658877 | orchestrator | 2026-02-23 21:14:03 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:14:03.658889 | orchestrator | 2026-02-23 21:14:03 | ERROR  | Dropping encrypted entries
2026-02-23 21:14:04.808892 | orchestrator | +--------------------------------------+--------+----------+
2026-02-23 21:14:04.808955 | orchestrator | | ID                                   | Name   | Status   |
2026-02-23 21:14:04.808964 | orchestrator | |--------------------------------------+--------+----------|
2026-02-23 21:14:04.808971 | orchestrator | | d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 | test-4 | ACTIVE   |
2026-02-23 21:14:04.808978 | orchestrator | | 95eb0896-2059-42bc-afda-7157fd138a76 | test-3 | ACTIVE   |
2026-02-23 21:14:04.808985 | orchestrator | | 9ea23385-fc20-43a9-adbc-02b35774482d | test-2 | ACTIVE   |
2026-02-23 21:14:04.808993 | orchestrator | | 2b54588c-a537-47da-a043-7a20a09aefd8 | test-1 | ACTIVE   |
2026-02-23 21:14:04.808999 | orchestrator | | 8706d052-6ae2-4461-a5fe-5d9008721ccb | test   | ACTIVE   |
2026-02-23 21:14:04.809006 | orchestrator | +--------------------------------------+--------+----------+
2026-02-23 21:14:05.141298 | orchestrator | + osism manage compute list testbed-node-5
2026-02-23 21:14:07.152566 | orchestrator | 2026-02-23 21:14:07 | ERROR  | Unable to get ansible vault password
2026-02-23 21:14:07.152665 | orchestrator | 2026-02-23 21:14:07 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:14:07.152735 | orchestrator | 2026-02-23 21:14:07 | ERROR  | Dropping encrypted entries
2026-02-23 21:14:07.897386 | orchestrator | +------+--------+----------+
2026-02-23 21:14:07.897463 | orchestrator | | ID   | Name   | Status   |
2026-02-23 21:14:07.897472 | orchestrator | |------+--------+----------|
2026-02-23 21:14:07.897479 | orchestrator | +------+--------+----------+
2026-02-23 21:14:08.209546 | orchestrator | + server_ping
2026-02-23 21:14:08.209952 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-02-23 21:14:08.210220 | orchestrator | ++ tr -d '\r'
2026-02-23 21:14:10.882237 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:14:10.882309 | orchestrator | + ping -c3 192.168.112.100
2026-02-23 21:14:10.890162 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2026-02-23 21:14:10.890229 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=5.85 ms
2026-02-23 21:14:11.887383 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.02 ms
2026-02-23 21:14:12.888826 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.69 ms
2026-02-23 21:14:12.888933 | orchestrator |
2026-02-23 21:14:12.888950 | orchestrator | --- 192.168.112.100 ping statistics ---
2026-02-23 21:14:12.888964 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-02-23 21:14:12.888976 | orchestrator | rtt min/avg/max/mdev = 1.693/3.188/5.851/1.887 ms
2026-02-23 21:14:12.888989 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:14:12.889006 | orchestrator | + ping -c3 192.168.112.192
2026-02-23 21:14:12.903032 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2026-02-23 21:14:12.903126 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=10.2 ms
2026-02-23 21:14:13.896467 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.16 ms
2026-02-23 21:14:14.897747 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.67 ms
2026-02-23 21:14:14.897833 | orchestrator |
2026-02-23 21:14:14.897840 | orchestrator | --- 192.168.112.192 ping statistics ---
2026-02-23 21:14:14.897845 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:14:14.897850 | orchestrator | rtt min/avg/max/mdev = 1.665/4.678/10.214/3.919 ms
2026-02-23 21:14:14.897855 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:14:14.897860 | orchestrator | + ping -c3 192.168.112.166
2026-02-23 21:14:14.907205 | orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data.
2026-02-23 21:14:14.907280 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=5.24 ms
2026-02-23 21:14:15.906111 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=2.54 ms
2026-02-23 21:14:16.907696 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=1.86 ms
2026-02-23 21:14:16.907784 | orchestrator |
2026-02-23 21:14:16.907792 | orchestrator | --- 192.168.112.166 ping statistics ---
2026-02-23 21:14:16.907798 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:14:16.907803 | orchestrator | rtt min/avg/max/mdev = 1.856/3.213/5.242/1.461 ms
2026-02-23 21:14:16.907808 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:14:16.907813 | orchestrator | + ping -c3 192.168.112.172
2026-02-23 21:14:16.919377 | orchestrator | PING 192.168.112.172 (192.168.112.172) 56(84) bytes of data.
2026-02-23 21:14:16.919457 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=1 ttl=63 time=7.90 ms
2026-02-23 21:14:17.914834 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=2 ttl=63 time=2.26 ms
2026-02-23 21:14:18.916365 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=3 ttl=63 time=1.73 ms
2026-02-23 21:14:18.916444 | orchestrator |
2026-02-23 21:14:18.916457 | orchestrator | --- 192.168.112.172 ping statistics ---
2026-02-23 21:14:18.916468 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:14:18.916479 | orchestrator | rtt min/avg/max/mdev = 1.732/3.963/7.898/2.790 ms
2026-02-23 21:14:18.916986 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:14:18.917048 | orchestrator | + ping -c3 192.168.112.161
2026-02-23 21:14:18.928172 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data.
2026-02-23 21:14:18.928250 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=6.59 ms
2026-02-23 21:14:19.925191 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=1.92 ms
2026-02-23 21:14:20.926494 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.56 ms
2026-02-23 21:14:20.926581 | orchestrator |
2026-02-23 21:14:20.926590 | orchestrator | --- 192.168.112.161 ping statistics ---
2026-02-23 21:14:20.926598 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:14:20.926605 | orchestrator | rtt min/avg/max/mdev = 1.561/3.358/6.592/2.291 ms
2026-02-23 21:14:20.926611 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-02-23 21:14:22.913123 | orchestrator | 2026-02-23 21:14:22 | ERROR  | Unable to get ansible vault password
2026-02-23 21:14:22.913188 | orchestrator | 2026-02-23 21:14:22 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:14:22.913196 | orchestrator | 2026-02-23 21:14:22 | ERROR  | Dropping encrypted entries
2026-02-23 21:14:24.227280 | orchestrator | 2026-02-23 21:14:24 | INFO  | Live migrating server d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07
2026-02-23 21:14:34.896037 | orchestrator | 2026-02-23 21:14:34 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:14:37.231436 | orchestrator | 2026-02-23 21:14:37 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:14:39.567488 | orchestrator | 2026-02-23 21:14:39 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:14:41.821561 | orchestrator | 2026-02-23 21:14:41 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:14:44.138718 | orchestrator | 2026-02-23 21:14:44 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:14:46.462778 | orchestrator | 2026-02-23 21:14:46 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:14:48.773879 | orchestrator | 2026-02-23 21:14:48 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:14:51.102867 | orchestrator | 2026-02-23 21:14:51 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:14:53.461490 | orchestrator | 2026-02-23 21:14:53 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:14:55.705065 | orchestrator | 2026-02-23 21:14:55 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:14:58.028152 | orchestrator | 2026-02-23 21:14:58 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) is still in progress
2026-02-23 21:15:00.404808 | orchestrator | 2026-02-23 21:15:00 | INFO  | Live migration of d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 (test-4) completed with status ACTIVE
2026-02-23 21:15:00.404970 | orchestrator | 2026-02-23 21:15:00 | INFO  | Live migrating server 95eb0896-2059-42bc-afda-7157fd138a76
2026-02-23 21:15:09.996572 | orchestrator | 2026-02-23 21:15:09 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:15:12.343670 | orchestrator | 2026-02-23 21:15:12 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:15:14.592113 | orchestrator | 2026-02-23 21:15:14 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:15:16.865809 | orchestrator | 2026-02-23 21:15:16 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:15:19.126173 | orchestrator | 2026-02-23 21:15:19 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:15:21.367012 | orchestrator | 2026-02-23 21:15:21 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:15:23.644944 | orchestrator | 2026-02-23 21:15:23 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:15:25.893354 | orchestrator | 2026-02-23 21:15:25 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) is still in progress
2026-02-23 21:15:28.179654 | orchestrator | 2026-02-23 21:15:28 | INFO  | Live migration of 95eb0896-2059-42bc-afda-7157fd138a76 (test-3) completed with status ACTIVE
2026-02-23 21:15:28.179727 | orchestrator | 2026-02-23 21:15:28 | INFO  | Live migrating server 9ea23385-fc20-43a9-adbc-02b35774482d
2026-02-23 21:15:37.314369 | orchestrator | 2026-02-23 21:15:37 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:15:39.688773 | orchestrator | 2026-02-23 21:15:39 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:15:41.990082 | orchestrator | 2026-02-23 21:15:41 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:15:44.190392 | orchestrator | 2026-02-23 21:15:44 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:15:46.397932 | orchestrator | 2026-02-23 21:15:46 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:15:48.594365 | orchestrator | 2026-02-23 21:15:48 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:15:50.832720 | orchestrator | 2026-02-23 21:15:50 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:15:53.054102 | orchestrator | 2026-02-23 21:15:53 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) is still in progress
2026-02-23 21:15:55.333902 | orchestrator | 2026-02-23 21:15:55 | INFO  | Live migration of 9ea23385-fc20-43a9-adbc-02b35774482d (test-2) completed with status ACTIVE
2026-02-23 21:15:55.334922 | orchestrator | 2026-02-23 21:15:55 | INFO  | Live migrating server 2b54588c-a537-47da-a043-7a20a09aefd8
2026-02-23 21:16:05.349882 | orchestrator | 2026-02-23 21:16:05 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:16:07.663861 | orchestrator | 2026-02-23 21:16:07 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:16:09.991258 | orchestrator | 2026-02-23 21:16:09 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:16:12.262154 | orchestrator | 2026-02-23 21:16:12 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:16:14.600803 | orchestrator | 2026-02-23 21:16:14 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:16:16.881586 | orchestrator | 2026-02-23 21:16:16 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:16:19.195210 | orchestrator | 2026-02-23 21:16:19 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:16:21.519137 | orchestrator | 2026-02-23 21:16:21 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) is still in progress
2026-02-23 21:16:23.852662 | orchestrator | 2026-02-23 21:16:23 | INFO  | Live migration of 2b54588c-a537-47da-a043-7a20a09aefd8 (test-1) completed with status ACTIVE
2026-02-23 21:16:23.852745 | orchestrator | 2026-02-23 21:16:23 | INFO  | Live migrating server 8706d052-6ae2-4461-a5fe-5d9008721ccb
2026-02-23 21:16:33.540235 | orchestrator | 2026-02-23 21:16:33 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:16:35.868544 | orchestrator | 2026-02-23 21:16:35 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:16:38.229196 | orchestrator | 2026-02-23 21:16:38 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:16:40.674992 | orchestrator | 2026-02-23 21:16:40 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:16:43.023311 | orchestrator | 2026-02-23 21:16:43 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:16:45.282187 | orchestrator | 2026-02-23 21:16:45 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:16:47.740704 | orchestrator | 2026-02-23 21:16:47 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:16:49.972989 | orchestrator | 2026-02-23 21:16:49 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:16:52.399664 | orchestrator | 2026-02-23 21:16:52 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) is still in progress
2026-02-23 21:16:54.717233 | orchestrator | 2026-02-23 21:16:54 | INFO  | Live migration of 8706d052-6ae2-4461-a5fe-5d9008721ccb (test) completed with status ACTIVE
2026-02-23 21:16:54.940230 | orchestrator | + compute_list
2026-02-23 21:16:54.940326 | orchestrator | + osism manage compute list testbed-node-3
2026-02-23 21:16:56.907058 | orchestrator | 2026-02-23 21:16:56 | ERROR  | Unable to get ansible vault password
2026-02-23 21:16:56.907139 | orchestrator | 2026-02-23 21:16:56 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:16:56.907147 | orchestrator | 2026-02-23 21:16:56 | ERROR  | Dropping encrypted entries
2026-02-23 21:16:57.847409 | orchestrator | +------+--------+----------+
2026-02-23 21:16:57.847525 | orchestrator | | ID | Name | Status |
2026-02-23 21:16:57.847534 | orchestrator | |------+--------+----------|
2026-02-23 21:16:57.847540 | orchestrator | +------+--------+----------+
2026-02-23 21:16:58.273698 | orchestrator | + osism manage compute list testbed-node-4
2026-02-23 21:17:00.158168 | orchestrator | 2026-02-23 21:17:00 | ERROR  | Unable to get ansible vault password
2026-02-23 21:17:00.158297 | orchestrator | 2026-02-23 21:17:00 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:17:00.158312 | orchestrator | 2026-02-23 21:17:00 | ERROR  | Dropping encrypted entries
2026-02-23 21:17:00.982406 | orchestrator | +------+--------+----------+
2026-02-23 21:17:00.982507 | orchestrator | | ID | Name | Status |
2026-02-23 21:17:00.982514 | orchestrator | |------+--------+----------|
2026-02-23 21:17:00.982519 | orchestrator | +------+--------+----------+
2026-02-23 21:17:01.181528 | orchestrator | + osism manage compute list testbed-node-5
2026-02-23 21:17:02.913044 | orchestrator | 2026-02-23 21:17:02 | ERROR  | Unable to get ansible vault password
2026-02-23 21:17:02.913138 | orchestrator | 2026-02-23 21:17:02 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-23 21:17:02.913147 | orchestrator | 2026-02-23 21:17:02 | ERROR  | Dropping encrypted entries
2026-02-23 21:17:04.135290 | orchestrator | +--------------------------------------+--------+----------+
2026-02-23 21:17:04.135353 | orchestrator | | ID | Name | Status |
2026-02-23 21:17:04.135359 | orchestrator | |--------------------------------------+--------+----------|
2026-02-23 21:17:04.135364 | orchestrator | | d76ebd24-2d1d-4cb1-b8d7-f24bb18d4e07 | test-4 | ACTIVE |
2026-02-23 21:17:04.135368 | orchestrator | | 95eb0896-2059-42bc-afda-7157fd138a76 | test-3 | ACTIVE |
2026-02-23 21:17:04.135372 | orchestrator | | 9ea23385-fc20-43a9-adbc-02b35774482d | test-2 | ACTIVE |
2026-02-23 21:17:04.135377 | orchestrator | | 2b54588c-a537-47da-a043-7a20a09aefd8 | test-1 | ACTIVE |
2026-02-23 21:17:04.135381 | orchestrator | | 8706d052-6ae2-4461-a5fe-5d9008721ccb | test | ACTIVE |
2026-02-23 21:17:04.135385 | orchestrator | +--------------------------------------+--------+----------+
2026-02-23 21:17:04.343697 | orchestrator | + server_ping
2026-02-23 21:17:04.344536 | orchestrator | ++ tr -d '\r'
2026-02-23 21:17:04.344598 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-02-23 21:17:07.275691 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:17:07.275782 | orchestrator | + ping -c3 192.168.112.100
2026-02-23 21:17:07.282260 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2026-02-23 21:17:07.282331 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=4.87 ms
2026-02-23 21:17:08.282131 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.41 ms
2026-02-23 21:17:09.283020 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.77 ms
2026-02-23 21:17:09.283103 | orchestrator |
2026-02-23 21:17:09.283113 | orchestrator | --- 192.168.112.100 ping statistics ---
2026-02-23 21:17:09.283121 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-02-23 21:17:09.283128 | orchestrator | rtt min/avg/max/mdev = 1.770/3.016/4.870/1.336 ms
2026-02-23 21:17:09.283525 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:17:09.283550 | orchestrator | + ping -c3 192.168.112.192
2026-02-23 21:17:09.295093 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2026-02-23 21:17:09.295158 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=6.93 ms
2026-02-23 21:17:10.291869 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.46 ms
2026-02-23 21:17:11.293518 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.98 ms
2026-02-23 21:17:11.293597 | orchestrator |
2026-02-23 21:17:11.293608 | orchestrator | --- 192.168.112.192 ping statistics ---
2026-02-23 21:17:11.293617 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:17:11.293624 | orchestrator | rtt min/avg/max/mdev = 1.975/3.786/6.926/2.228 ms
2026-02-23 21:17:11.294101 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:17:11.294120 | orchestrator | + ping -c3 192.168.112.166
2026-02-23 21:17:11.303848 | orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data.
2026-02-23 21:17:11.303922 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=5.79 ms
2026-02-23 21:17:12.301732 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=1.62 ms
2026-02-23 21:17:13.303677 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=1.60 ms
2026-02-23 21:17:13.304232 | orchestrator |
2026-02-23 21:17:13.304262 | orchestrator | --- 192.168.112.166 ping statistics ---
2026-02-23 21:17:13.304271 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-02-23 21:17:13.304279 | orchestrator | rtt min/avg/max/mdev = 1.598/3.000/5.786/1.970 ms
2026-02-23 21:17:13.304539 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:17:13.304565 | orchestrator | + ping -c3 192.168.112.172
2026-02-23 21:17:13.314351 | orchestrator | PING 192.168.112.172 (192.168.112.172) 56(84) bytes of data.
2026-02-23 21:17:13.314394 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=1 ttl=63 time=5.28 ms
2026-02-23 21:17:14.312556 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=2 ttl=63 time=1.81 ms
2026-02-23 21:17:15.314834 | orchestrator | 64 bytes from 192.168.112.172: icmp_seq=3 ttl=63 time=1.42 ms
2026-02-23 21:17:15.314899 | orchestrator |
2026-02-23 21:17:15.314911 | orchestrator | --- 192.168.112.172 ping statistics ---
2026-02-23 21:17:15.314919 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-02-23 21:17:15.314929 | orchestrator | rtt min/avg/max/mdev = 1.417/2.835/5.280/1.735 ms
2026-02-23 21:17:15.314937 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-23 21:17:15.314946 | orchestrator | + ping -c3 192.168.112.161
2026-02-23 21:17:15.323164 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data.
2026-02-23 21:17:15.323237 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=3.74 ms
2026-02-23 21:17:16.323656 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=2.08 ms
2026-02-23 21:17:17.324932 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.92 ms
2026-02-23 21:17:17.325014 | orchestrator |
2026-02-23 21:17:17.325023 | orchestrator | --- 192.168.112.161 ping statistics ---
2026-02-23 21:17:17.325031 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-23 21:17:17.325037 | orchestrator | rtt min/avg/max/mdev = 1.916/2.578/3.737/0.822 ms
2026-02-23 21:17:17.644787 | orchestrator | ok: Runtime: 0:17:05.555644
2026-02-23 21:17:17.699034 |
2026-02-23 21:17:17.699168 | TASK [Run tempest]
2026-02-23 21:17:18.232815 | orchestrator | skipping: Conditional result was False
2026-02-23 21:17:18.252545 |
2026-02-23 21:17:18.252744 | TASK [Check prometheus alert status]
2026-02-23 21:17:18.791375 | orchestrator | skipping: Conditional result was False
2026-02-23 21:17:18.794484 |
2026-02-23 21:17:18.794686 | PLAY RECAP
2026-02-23 21:17:18.794961 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2026-02-23 21:17:18.795030 |
2026-02-23 21:17:19.025109 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-02-23 21:17:19.027387 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-23 21:17:19.760288 |
2026-02-23 21:17:19.760448 | PLAY [Post output play]
2026-02-23 21:17:19.775547 |
2026-02-23 21:17:19.775709 | LOOP [stage-output : Register sources]
2026-02-23 21:17:19.846416 |
2026-02-23 21:17:19.846742 | TASK [stage-output : Check sudo]
2026-02-23 21:17:20.721581 | orchestrator | sudo: a password is required
2026-02-23 21:17:20.885031 | orchestrator | ok: Runtime: 0:00:00.022943
2026-02-23 21:17:20.898369 |
2026-02-23 21:17:20.898519 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-23 21:17:20.933102 |
2026-02-23 21:17:20.933308 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-23 21:17:20.999615 | orchestrator | ok
2026-02-23 21:17:21.008566 |
2026-02-23 21:17:21.008748 | LOOP [stage-output : Ensure target folders exist]
2026-02-23 21:17:21.539701 | orchestrator | ok: "docs"
2026-02-23 21:17:21.540040 |
2026-02-23 21:17:21.837022 | orchestrator | ok: "artifacts"
2026-02-23 21:17:22.183024 | orchestrator | ok: "logs"
2026-02-23 21:17:22.197205 |
2026-02-23 21:17:22.197340 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-23 21:17:22.228835 |
2026-02-23 21:17:22.229032 | TASK [stage-output : Make all log files readable]
2026-02-23 21:17:22.570186 | orchestrator | ok
2026-02-23 21:17:22.578980 |
2026-02-23 21:17:22.579117 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-23 21:17:22.613448 | orchestrator | skipping: Conditional result was False
2026-02-23 21:17:22.625229 |
2026-02-23 21:17:22.625373 | TASK [stage-output : Discover log files for compression]
2026-02-23 21:17:22.649856 | orchestrator | skipping: Conditional result was False
2026-02-23 21:17:22.672139 |
2026-02-23 21:17:22.672347 | LOOP [stage-output : Archive everything from logs]
2026-02-23 21:17:22.719407 |
2026-02-23 21:17:22.719592 | PLAY [Post cleanup play]
2026-02-23 21:17:22.728849 |
2026-02-23 21:17:22.728959 | TASK [Set cloud fact (Zuul deployment)]
2026-02-23 21:17:22.787910 | orchestrator | ok
2026-02-23 21:17:22.797676 |
2026-02-23 21:17:22.797805 | TASK [Set cloud fact (local deployment)]
2026-02-23 21:17:22.832433 | orchestrator | skipping: Conditional result was False
2026-02-23 21:17:22.849816 |
2026-02-23 21:17:22.849984 | TASK [Clean the cloud environment]
2026-02-23 21:17:24.579931 | orchestrator | 2026-02-23 21:17:24 - clean up servers
2026-02-23 21:17:25.366143 | orchestrator | 2026-02-23 21:17:25 - testbed-manager
2026-02-23 21:17:25.444985 | orchestrator | 2026-02-23 21:17:25 - testbed-node-4
2026-02-23 21:17:25.534502 | orchestrator | 2026-02-23 21:17:25 - testbed-node-2
2026-02-23 21:17:25.616847 | orchestrator | 2026-02-23 21:17:25 - testbed-node-3
2026-02-23 21:17:25.704345 | orchestrator | 2026-02-23 21:17:25 - testbed-node-0
2026-02-23 21:17:25.789693 | orchestrator | 2026-02-23 21:17:25 - testbed-node-1
2026-02-23 21:17:25.874063 | orchestrator | 2026-02-23 21:17:25 - testbed-node-5
2026-02-23 21:17:25.967403 | orchestrator | 2026-02-23 21:17:25 - clean up keypairs
2026-02-23 21:17:25.981947 | orchestrator | 2026-02-23 21:17:25 - testbed
2026-02-23 21:17:26.002717 | orchestrator | 2026-02-23 21:17:26 - wait for servers to be gone
2026-02-23 21:17:36.817544 | orchestrator | 2026-02-23 21:17:36 - clean up ports
2026-02-23 21:17:37.004449 | orchestrator | 2026-02-23 21:17:37 - 09aba50d-d76f-407d-b8ed-f20cf479155e
2026-02-23 21:17:37.278294 | orchestrator | 2026-02-23 21:17:37 - 25fccad9-e381-49a2-8694-adf4bc97a2a7
2026-02-23 21:17:37.515396 | orchestrator | 2026-02-23 21:17:37 - 346ad253-dd0b-437f-82ac-9875717212db
2026-02-23 21:17:37.745419 | orchestrator | 2026-02-23 21:17:37 - 4ad9e5c8-0422-4eb4-a7b0-9db75bdd313c
2026-02-23 21:17:37.956373 | orchestrator | 2026-02-23 21:17:37 - 8b1c8f26-ef6c-4569-9b39-148effeaf5ea
2026-02-23 21:17:38.169543 | orchestrator | 2026-02-23 21:17:38 - 96972253-9e53-4977-8a59-bbfdfa5aaea5
2026-02-23 21:17:38.373623 | orchestrator | 2026-02-23 21:17:38 - f9ca4ad9-7ccf-4af2-802a-09b65db302ed
2026-02-23 21:17:38.830470 | orchestrator | 2026-02-23 21:17:38 - clean up volumes
2026-02-23 21:17:38.944429 | orchestrator | 2026-02-23 21:17:38 - testbed-volume-2-node-base
2026-02-23 21:17:38.983643 | orchestrator | 2026-02-23 21:17:38 - testbed-volume-5-node-base
2026-02-23 21:17:39.023438 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-1-node-base
2026-02-23 21:17:39.062932 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-3-node-base
2026-02-23 21:17:39.106284 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-0-node-base
2026-02-23 21:17:39.148222 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-4-node-base
2026-02-23 21:17:39.190952 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-manager-base
2026-02-23 21:17:39.231198 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-0-node-3
2026-02-23 21:17:39.275131 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-2-node-5
2026-02-23 21:17:39.317103 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-3-node-3
2026-02-23 21:17:39.359320 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-4-node-4
2026-02-23 21:17:39.406863 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-6-node-3
2026-02-23 21:17:39.445455 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-7-node-4
2026-02-23 21:17:39.485985 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-1-node-4
2026-02-23 21:17:39.543711 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-8-node-5
2026-02-23 21:17:39.584797 | orchestrator | 2026-02-23 21:17:39 - testbed-volume-5-node-5
2026-02-23 21:17:39.625047 | orchestrator | 2026-02-23 21:17:39 - disconnect routers
2026-02-23 21:17:39.702564 | orchestrator | 2026-02-23 21:17:39 - testbed
2026-02-23 21:17:40.605914 | orchestrator | 2026-02-23 21:17:40 - clean up subnets
2026-02-23 21:17:40.644745 | orchestrator | 2026-02-23 21:17:40 - subnet-testbed-management
2026-02-23 21:17:40.817137 | orchestrator | 2026-02-23 21:17:40 - clean up networks
2026-02-23 21:17:40.993880 | orchestrator | 2026-02-23 21:17:40 - net-testbed-management
2026-02-23 21:17:41.299793 | orchestrator | 2026-02-23 21:17:41 - clean up security groups
2026-02-23 21:17:41.334573 | orchestrator | 2026-02-23 21:17:41 - testbed-node
2026-02-23 21:17:41.445691 | orchestrator | 2026-02-23 21:17:41 - testbed-management
2026-02-23 21:17:41.567577 | orchestrator | 2026-02-23 21:17:41 - clean up floating ips
2026-02-23 21:17:41.598832 | orchestrator | 2026-02-23 21:17:41 - 81.163.193.96
2026-02-23 21:17:41.969455 | orchestrator | 2026-02-23 21:17:41 - clean up routers
2026-02-23 21:17:42.029963 | orchestrator | 2026-02-23 21:17:42 - testbed
2026-02-23 21:17:42.962928 | orchestrator | ok: Runtime: 0:00:19.711915
2026-02-23 21:17:42.965348 |
2026-02-23 21:17:42.965454 | PLAY RECAP
2026-02-23 21:17:42.965526 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-23 21:17:42.965561 |
2026-02-23 21:17:43.098481 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-23 21:17:43.101098 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-23 21:17:43.827233 |
2026-02-23 21:17:43.827392 | PLAY [Cleanup play]
2026-02-23 21:17:43.843800 |
2026-02-23 21:17:43.843941 | TASK [Set cloud fact (Zuul deployment)]
2026-02-23 21:17:43.913048 | orchestrator | ok
2026-02-23 21:17:43.922969 |
2026-02-23 21:17:43.923125 | TASK [Set cloud fact (local deployment)]
2026-02-23 21:17:43.959208 | orchestrator | skipping: Conditional result was False
2026-02-23 21:17:43.977307 |
2026-02-23 21:17:43.977472 | TASK [Clean the cloud environment]
2026-02-23 21:17:45.195837 | orchestrator | 2026-02-23 21:17:45 - clean up servers
2026-02-23 21:17:45.670445 | orchestrator | 2026-02-23 21:17:45 - clean up keypairs
2026-02-23 21:17:45.685239 | orchestrator | 2026-02-23 21:17:45 - wait for servers to be gone
2026-02-23 21:17:45.724935 | orchestrator | 2026-02-23 21:17:45 - clean up ports
2026-02-23 21:17:45.821099 | orchestrator | 2026-02-23 21:17:45 - clean up volumes
2026-02-23 21:17:45.904366 | orchestrator | 2026-02-23 21:17:45 - disconnect routers
2026-02-23 21:17:45.929858 | orchestrator | 2026-02-23 21:17:45 - clean up subnets
2026-02-23 21:17:45.952987 | orchestrator | 2026-02-23 21:17:45 - clean up networks
2026-02-23 21:17:46.149725 | orchestrator | 2026-02-23 21:17:46 - clean up security groups
2026-02-23 21:17:46.186364 | orchestrator | 2026-02-23 21:17:46 - clean up floating ips
2026-02-23 21:17:46.208647 | orchestrator | 2026-02-23 21:17:46 - clean up routers
2026-02-23 21:17:46.516898 | orchestrator | ok: Runtime: 0:00:01.503932
2026-02-23 21:17:46.522015 |
2026-02-23 21:17:46.522205 | PLAY RECAP
2026-02-23 21:17:46.522350 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-23 21:17:46.522422 |
2026-02-23 21:17:46.650626 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-23 21:17:46.652966 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-23 21:17:47.482204 |
2026-02-23 21:17:47.482361 | PLAY [Base post-fetch]
2026-02-23 21:17:47.496767 |
2026-02-23 21:17:47.496893 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-23 21:17:47.562763 | orchestrator | skipping: Conditional result was False
2026-02-23 21:17:47.576636 |
2026-02-23 21:17:47.576848 | TASK [fetch-output : Set log path for single node]
2026-02-23 21:17:47.623779 | orchestrator | ok
2026-02-23 21:17:47.633483 |
2026-02-23 21:17:47.633634 | LOOP [fetch-output : Ensure local output dirs]
2026-02-23 21:17:48.109873 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/66cbaed88017496cb520464d388d0f6f/work/logs"
2026-02-23 21:17:48.362756 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/66cbaed88017496cb520464d388d0f6f/work/artifacts"
2026-02-23 21:17:48.638117 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/66cbaed88017496cb520464d388d0f6f/work/docs"
2026-02-23 21:17:48.660953 |
2026-02-23 21:17:48.661122 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-23 21:17:49.591360 | orchestrator | changed: .d..t...... ./
2026-02-23 21:17:49.591623 | orchestrator | changed: All items complete
2026-02-23 21:17:49.591662 |
2026-02-23 21:17:50.287524 | orchestrator | changed: .d..t...... ./
2026-02-23 21:17:50.981121 | orchestrator | changed: .d..t...... ./
2026-02-23 21:17:51.007516 |
2026-02-23 21:17:51.007711 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-23 21:17:51.499269 | orchestrator -> localhost | ok: Item: artifacts Runtime: 0:00:00.009475
2026-02-23 21:17:51.777073 | orchestrator -> localhost | ok: Item: docs Runtime: 0:00:00.010951
2026-02-23 21:17:51.803365 |
2026-02-23 21:17:51.803502 | PLAY RECAP
2026-02-23 21:17:51.803669 | orchestrator | ok: 4 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-23 21:17:51.803740 |
2026-02-23 21:17:51.937836 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-23 21:17:51.940241 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-23 21:17:52.712491 |
2026-02-23 21:17:52.712676 | PLAY [Base post]
2026-02-23 21:17:52.727523 |
2026-02-23 21:17:52.727685 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-23 21:17:53.786731 | orchestrator | changed
2026-02-23 21:17:53.796495 |
2026-02-23 21:17:53.796673 | PLAY RECAP
2026-02-23 21:17:53.796754 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-23 21:17:53.796834 |
2026-02-23 21:17:53.927303 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-23 21:17:53.928353 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-23 21:17:54.750531 |
2026-02-23 21:17:54.750729 | PLAY [Base post-logs]
2026-02-23 21:17:54.761799 |
2026-02-23 21:17:54.761941 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-23 21:17:55.224010 | localhost | changed
2026-02-23 21:17:55.234248 |
2026-02-23 21:17:55.234405 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-23 21:17:55.270397 | localhost | ok
2026-02-23 21:17:55.274239 |
2026-02-23 21:17:55.274358 | TASK [Set zuul-log-path fact]
2026-02-23 21:17:55.290568 | localhost | ok
2026-02-23 21:17:55.301679 |
2026-02-23 21:17:55.301793 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-23 21:17:55.329229 | localhost | ok
2026-02-23 21:17:55.336181 |
2026-02-23 21:17:55.336359 | TASK [upload-logs : Create log directories]
2026-02-23 21:17:55.833574 | localhost | changed
2026-02-23 21:17:55.839931 |
2026-02-23 21:17:55.840098 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-23 21:17:56.362235 | localhost -> localhost | ok: Runtime: 0:00:00.006850
2026-02-23 21:17:56.370242 |
2026-02-23 21:17:56.370423 | TASK [upload-logs : Upload logs to log server]
2026-02-23 21:17:56.959033 | localhost | Output suppressed because no_log was given
2026-02-23 21:17:56.962988 |
2026-02-23 21:17:56.963167 | LOOP [upload-logs : Compress console log and json output]
2026-02-23 21:17:57.019200 | localhost | skipping: Conditional result was False
2026-02-23 21:17:57.023923 | localhost | skipping: Conditional result was False
2026-02-23 21:17:57.036049 |
2026-02-23 21:17:57.036269 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-23 21:17:57.084722 | localhost | skipping: Conditional result was False
2026-02-23 21:17:57.085334 |
2026-02-23 21:17:57.088650 | localhost | skipping: Conditional result was False
2026-02-23 21:17:57.102763 |
2026-02-23 21:17:57.103025 | LOOP [upload-logs : Upload console log and json output]